url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at | updated_at | closed_at | author_association (string) | active_lock_reason (float64) | draft (float64) | pull_request (dict) | body (string) | reactions (dict) | timeline_url (string) | performed_via_github_app (float64) | state_reason (string) | is_pull_request (bool) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/3625 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3625/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3625/comments | https://api.github.com/repos/huggingface/datasets/issues/3625/events | https://github.com/huggingface/datasets/issues/3625 | 1,113,017,522 | I_kwDODunzps5CV0yy | 3,625 | Add a metadata field for when source data was produced | {
"avatar_url": "https://avatars.githubusercontent.com/u/8995957?v=4",
"events_url": "https://api.github.com/users/davanstrien/events{/privacy}",
"followers_url": "https://api.github.com/users/davanstrien/followers",
"following_url": "https://api.github.com/users/davanstrien/following{/other_user}",
"gists_url": "https://api.github.com/users/davanstrien/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/davanstrien",
"id": 8995957,
"login": "davanstrien",
"node_id": "MDQ6VXNlcjg5OTU5NTc=",
"organizations_url": "https://api.github.com/users/davanstrien/orgs",
"received_events_url": "https://api.github.com/users/davanstrien/received_events",
"repos_url": "https://api.github.com/users/davanstrien/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/davanstrien/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davanstrien/subscriptions",
"type": "User",
"url": "https://api.github.com/users/davanstrien"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"A question to the datasets maintainers: is there a policy about how the set of allowed metadata fields is maintained and expanded?\r\n\r\nMetadata are very important, but defining the standard is always a struggle between allowing exhaustivity without being too complex. Archivists have Dublin Core, open data has https://frictionlessdata.io/, geo has ISO 19139 and INSPIRE, etc. and it's always a mess! I'm not sure we want to dig too much into it, but I'm curious to know if there has been some work on the metadata standard.",
"> Metadata are very important, but defining the standard is always a struggle between allowing exhaustivity without being too complex. Archivists have Dublin Core, open data has [frictionlessdata.io](https://frictionlessdata.io/), geo has ISO 19139 and INSPIRE, etc. and it's always a mess! I'm not sure we want to dig too much into it, but I'm curious to know if there has been some work on the metadata standard.\r\n\r\n\r\nI thought this is a potential issue with adding this field since it might be hard to define what is general enough to be useful for most data vs what becomes very domain-specific. Potentially adding one extra field leads to more and more fields in the future. \r\n\r\nAnother issue is that there are some metadata standards around data i.e. [datacite](https://schema.datacite.org/meta/kernel-4.4/), but not many aimed explicitly at ML data afaik. Some of the discussions around metadata for ML are also more focused on versioning/managing data in production environments. My thinking is that here, some reference to the time of production would also often be tracked/relevant, i.e. for triggering model training, so having this information available in the hub would also help address this use case. ",
"Adding a relevant paper related to this topic: [TimeLMs: Diachronic Language Models from Twitter](https://arxiv.org/abs/2202.03829)\r\n\r\n",
"Related: https://github.com/huggingface/datasets/issues/3877",
"Also related: the [Data Catalog Vocabulary - DCAT](https://www.w3.org/TR/vocab-dcat/) standard will be discussed in a new Working Group at the W3C: https://www.w3.org/2022/06/dx-wg-charter.html"
] | "2022-01-24T18:52:39Z" | "2022-06-28T13:54:49Z" | null | MEMBER | null | null | null | **Is your feature request related to a problem? Please describe.**
The current problem is that information about when source data was produced is not easily visible. Though there are a variety of metadata fields available in the dataset viewer, time period information is not included. This feature request suggests making metadata relating to the time that the underlying *source* data was produced more prominent and outlines why this specific information is of particular importance, both in domain-specific historic research and more broadly.
**Describe the solution you'd like**
There are a variety of metadata fields exposed in the dataset viewer (license, task categories, etc.) These fields make this metadata more prominent both for human users and as potentially machine-actionable information (for example, through the API). I would propose to add a metadata field that says when some underlying data was produced. For example, a dataset would be labelled as being produced between `1800-1900`.
**Describe alternatives you've considered**
This information is sometimes available in the Datacard or a paper describing the dataset. However, it's often not that easy to identify or extract this information, particularly if you want to use this field as a filter to identify relevant datasets.
**Additional context**
I believe this feature is relevant for a number of reasons:
- Increasingly, there is an interest in using historical data for training language models (for example, https://huggingface.co/dbmdz/bert-base-historic-dutch-cased), and datasets to support this task (for example, https://huggingface.co/datasets/bnl_newspapers). For these datasets, indicating the time periods covered is particularly relevant.
- More broadly, time is likely a common source of domain drift. Datasets of movie reviews from the 90s may not work well for recent movie reviews. As the documentation and long-term management of ML data become more of a priority, quickly understanding when the underlying text (or other data types) was produced becomes arguably more important.
- time-series data: datasets are adding more support for time series data. Again, the periods covered might be particularly relevant here.
**open questions**
- I think some of my points above apply not only to the underlying data but also to annotations. As a result, there could also be an argument for encoding this information somewhere. However, I would argue (but could be persuaded otherwise) that this is probably less important for filtering. This type of context is already addressed in the datasheets template and often requires more narrative to discuss.
- what level of granularity would make sense for this? e.g. assigning a decade, century or year?
- how to encode this information? What formatting makes sense?
- what specific time to encode; a date range? (mean, modal, min, max value?)
This is a slightly amorphous feature request - I would be happy to discuss further/try and propose a more concrete solution if this seems like something that could be worth considering. I realise this might also touch on other parts of the 🤗 hubs ecosystem. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3625/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3625/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5254 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5254/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5254/comments | https://api.github.com/repos/huggingface/datasets/issues/5254/events | https://github.com/huggingface/datasets/pull/5254 | 1,452,600,088 | PR_kwDODunzps5DE47u | 5,254 | typo | {
"avatar_url": "https://avatars.githubusercontent.com/u/7569098?v=4",
"events_url": "https://api.github.com/users/WrRan/events{/privacy}",
"followers_url": "https://api.github.com/users/WrRan/followers",
"following_url": "https://api.github.com/users/WrRan/following{/other_user}",
"gists_url": "https://api.github.com/users/WrRan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/WrRan",
"id": 7569098,
"login": "WrRan",
"node_id": "MDQ6VXNlcjc1NjkwOTg=",
"organizations_url": "https://api.github.com/users/WrRan/orgs",
"received_events_url": "https://api.github.com/users/WrRan/received_events",
"repos_url": "https://api.github.com/users/WrRan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/WrRan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WrRan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/WrRan"
} | [] | closed | false | null | [] | null | [] | "2022-11-17T02:39:57Z" | "2022-11-18T10:53:45Z" | "2022-11-18T10:53:45Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5254.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5254",
"merged_at": "2022-11-18T10:53:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5254.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5254"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5254/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5254/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5520 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5520/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5520/comments | https://api.github.com/repos/huggingface/datasets/issues/5520/events | https://github.com/huggingface/datasets/issues/5520 | 1,578,417,074 | I_kwDODunzps5eFLuy | 5,520 | ClassLabel.cast_storage raises TypeError when called on an empty IntegerArray | {
"avatar_url": "https://avatars.githubusercontent.com/u/6591505?v=4",
"events_url": "https://api.github.com/users/marioga/events{/privacy}",
"followers_url": "https://api.github.com/users/marioga/followers",
"following_url": "https://api.github.com/users/marioga/following{/other_user}",
"gists_url": "https://api.github.com/users/marioga/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/marioga",
"id": 6591505,
"login": "marioga",
"node_id": "MDQ6VXNlcjY1OTE1MDU=",
"organizations_url": "https://api.github.com/users/marioga/orgs",
"received_events_url": "https://api.github.com/users/marioga/received_events",
"repos_url": "https://api.github.com/users/marioga/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/marioga/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marioga/subscriptions",
"type": "User",
"url": "https://api.github.com/users/marioga"
} | [] | closed | false | null | [] | null | [] | "2023-02-09T18:46:52Z" | "2023-02-12T11:17:18Z" | "2023-02-12T11:17:18Z" | CONTRIBUTOR | null | null | null | ### Describe the bug
`ClassLabel.cast_storage` raises `TypeError` when called on an empty `IntegerArray`.
### Steps to reproduce the bug
Minimal steps:
```python
import pyarrow as pa
from datasets import ClassLabel
ClassLabel(names=['foo', 'bar']).cast_storage(pa.array([], pa.int64()))
```
In practice, this bug arises in situations like the one below:
```python
from datasets import ClassLabel, Dataset, Features, Sequence
dataset = Dataset.from_dict({'labels': [[], []]}, features=Features({'labels': Sequence(ClassLabel(names=['foo', 'bar']))}))
# this raises TypeError
dataset.map(batched=True, batch_size=1)
```
### Expected behavior
`ClassLabel.cast_storage` should return an empty Int64Array.
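For reference, here is a minimal sketch (not the library's actual fix) of what the expected behavior boils down to in plain PyArrow: an empty int64 array casts without error, so an empty input could simply be passed through.
```python
import pyarrow as pa

# An empty int64 array casts cleanly in plain PyArrow...
empty = pa.array([], pa.int64())
assert empty.cast(pa.int64()).equals(pa.array([], pa.int64()))

# ...so one would expect ClassLabel.cast_storage(empty) to return an
# empty Int64Array as well, instead of raising TypeError.
```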
### Environment info
- `datasets` version: 2.9.1.dev0
- Platform: Linux-4.15.0-1032-aws-x86_64-with-glibc2.27
- Python version: 3.10.6
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5520/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5520/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1359 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1359/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1359/comments | https://api.github.com/repos/huggingface/datasets/issues/1359/events | https://github.com/huggingface/datasets/pull/1359 | 760,055,969 | MDExOlB1bGxSZXF1ZXN0NTM0OTUxMTgy | 1,359 | Add JNLPBA | {
"avatar_url": "https://avatars.githubusercontent.com/u/17855740?v=4",
"events_url": "https://api.github.com/users/edugp/events{/privacy}",
"followers_url": "https://api.github.com/users/edugp/followers",
"following_url": "https://api.github.com/users/edugp/following{/other_user}",
"gists_url": "https://api.github.com/users/edugp/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/edugp",
"id": 17855740,
"login": "edugp",
"node_id": "MDQ6VXNlcjE3ODU1NzQw",
"organizations_url": "https://api.github.com/users/edugp/orgs",
"received_events_url": "https://api.github.com/users/edugp/received_events",
"repos_url": "https://api.github.com/users/edugp/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/edugp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/edugp/subscriptions",
"type": "User",
"url": "https://api.github.com/users/edugp"
} | [] | closed | false | null | [] | null | [] | "2020-12-09T06:48:51Z" | "2020-12-10T14:24:36Z" | "2020-12-10T14:24:36Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1359.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1359",
"merged_at": "2020-12-10T14:24:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1359.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1359"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1359/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1359/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4542 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4542/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4542/comments | https://api.github.com/repos/huggingface/datasets/issues/4542/events | https://github.com/huggingface/datasets/issues/4542 | 1,280,269,445 | I_kwDODunzps5MT1yF | 4,542 | [to_tf_dataset] Use Feather for better compatibility with TensorFlow ? | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] | open | false | null | [] | null | [
"This has so much potential to be great! Also I think you tagged some poor random dude on the internet whose name is also Joao, lol, edited that for you! ",
"cc @sayakpaul here too, since he was interested in our new approaches to converting datasets!",
"Noted and I will look into the thread in detail tomorrow once I log back in. ",
"@lhoestq I have used TFRecords with `tf.data` for both vision and text and I can say that they are quite performant. I haven't worked with Feather yet as similarly as I have with TFRecords. If you haven't started the benchmarking script yet, I can prepare a Colab notebook that loads Feather files, converts them into a `tf.data` pipeline, and does some basic preprocessing. \r\n\r\nBut in my limited understanding, Feather might be better suited for CSV files. Not yet sure if it's good for modalities like images. ",
"> Not yet sure if it's good for modalities like images.\r\n\r\nWe store images pretty much the same way as tensorflow_datasets (i.e. storing the encoded image bytes, or a path to the local image, so that the image can be decoded on-the-fly), so as long as we use something similar as TFDS for image decoding it should be ok",
"So for image datasets, we could potentially store the paths in the feather format and decode and read them on the fly? But it introduces an I/O redundancy of having to read the images every time.\r\n\r\nWith caching it could be somewhat mitigated but it's not a good solution for bigger image datasets. ",
"> So for image datasets, we could potentially store the paths in the feather format and decode and read them on the fly?\r\n\r\nhopefully yes :) \r\n\r\nI double-checked the TFDS source code and they always save the bytes actually, not the path. Anyway we'll see if we run into issues or not (as a first step we can require the bytes to be in the feather file)",
"Yes. For images, TFDS actually prepares TFRecords first for encoding and then reuses them for every subsequent call. ",
"@lhoestq @Rocketknight1 I worked on [this PoC](https://gist.github.com/sayakpaul/f7d5cc312cd01cb31098fad3fd9c6b59) that\r\n\r\n* Creates Feather files from a medium resolution dataset (`tf_flowers`).\r\n* Explores different options with TensorFlow IO to load the Feather files. \r\n\r\nI haven't benchmarked those different options yet. There's also a gotcha that I have noted in the PoC. I hope it gets us started but I'm sorry if this is redundant. ",
"Cool thanks ! If I understand correctly in your PoC you store the flattened array of pixels in the feather file. This will take a lot of disk space.\r\n\r\nMaybe we could just save the encoded bytes and let users apply a `map` to decode/transform them into the format they need for training ? Users can use tf.image to do so for example",
"@lhoestq this is what I tried:\r\n\r\n```py\r\ndef read_image(path):\r\n with open(path, \"rb\") as f:\r\n return f.read()\r\n\r\n\r\ntotal_images_written = 0\r\n\r\nfor step in tqdm.tnrange(int(math.ceil(len(image_paths) / batch_size))):\r\n batch_image_paths = image_paths[step * batch_size : (step + 1) * batch_size]\r\n batch_image_labels = all_integer_labels[step * batch_size : (step + 1) * batch_size]\r\n\r\n data = [read_image(path) for path in batch_image_paths]\r\n table = pa.Table.from_arrays([data, batch_image_labels], [\"data\", \"labels\"])\r\n write_feather(table, f\"/tmp/flowers_feather_{step}.feather\", chunksize=chunk_size)\r\n total_images_written += len(batch_image_paths)\r\n print(f\"Total images written: {total_images_written}.\")\r\n\r\n del data\r\n```\r\n\r\nI got the feather files done (no resizing required as you can see):\r\n\r\n```sh\r\nls -lh /tmp/*.feather\r\n\r\n-rw-r--r-- 1 sayakpaul wheel 64M Jun 24 09:28 /tmp/flowers_feather_0.feather\r\n-rw-r--r-- 1 sayakpaul wheel 59M Jun 24 09:28 /tmp/flowers_feather_1.feather\r\n-rw-r--r-- 1 sayakpaul wheel 51M Jun 24 09:28 /tmp/flowers_feather_2.feather\r\n-rw-r--r-- 1 sayakpaul wheel 45M Jun 24 09:28 /tmp/flowers_feather_3.feather\r\n```\r\n\r\nNow there seems to be a problem with `tfio.arrow`:\r\n\r\n```py\r\nimport tensorflow_io.arrow as arrow_io\r\n\r\n\r\ndataset = arrow_io.ArrowFeatherDataset(\r\n [\"/tmp/flowers_feather_0.feather\"],\r\n columns=(0, 1),\r\n output_types=(tf.string, tf.int64),\r\n output_shapes=([], []),\r\n batch_mode=\"auto\",\r\n)\r\n\r\nprint(dataset.element_spec) \r\n```\r\n\r\nPrints:\r\n\r\n```\r\n(TensorSpec(shape=(None,), dtype=tf.string, name=None),\r\n TensorSpec(shape=(None,), dtype=tf.int64, name=None))\r\n```\r\n\r\nBut when I do `sample = next(iter(dataset))` it goes into:\r\n\r\n```py\r\nInternalError Traceback (most recent call last)\r\nInput In [30], in <cell line: 1>()\r\n----> 1 sample = next(iter(dataset))\r\n\r\nFile ~/.local/bin/.virtualenvs/jax/lib/python3.8/site-packages/tensorflow/python/data/ops/iterator_ops.py:766, in OwnedIterator.__next__(self)\r\n 764 def __next__(self):\r\n 765 try:\r\n--> 766 return self._next_internal()\r\n 767 except errors.OutOfRangeError:\r\n 768 raise StopIteration\r\n\r\nFile ~/.local/bin/.virtualenvs/jax/lib/python3.8/site-packages/tensorflow/python/data/ops/iterator_ops.py:749, in OwnedIterator._next_internal(self)\r\n 746 # TODO(b/77291417): This runs in sync mode as iterators use an error status\r\n 747 # to communicate that there is no more data to iterate over.\r\n 748 with context.execution_mode(context.SYNC):\r\n--> 749 ret = gen_dataset_ops.iterator_get_next(\r\n 750 self._iterator_resource,\r\n 751 output_types=self._flat_output_types,\r\n 752 output_shapes=self._flat_output_shapes)\r\n 754 try:\r\n 755 # Fast path for the case `self._structure` is not a nested structure.\r\n 756 return self._element_spec._from_compatible_tensor_list(ret) # pylint: disable=protected-access\r\n\r\nFile ~/.local/bin/.virtualenvs/jax/lib/python3.8/site-packages/tensorflow/python/ops/gen_dataset_ops.py:3017, in iterator_get_next(iterator, output_types, output_shapes, name)\r\n 3015 return _result\r\n 3016 except _core._NotOkStatusException as e:\r\n-> 3017 _ops.raise_from_not_ok_status(e, name)\r\n 3018 except _core._FallbackException:\r\n 3019 pass\r\n\r\nFile ~/.local/bin/.virtualenvs/jax/lib/python3.8/site-packages/tensorflow/python/framework/ops.py:7164, in raise_from_not_ok_status(e, name)\r\n 7162 def raise_from_not_ok_status(e, name):\r\n 7163 
e.message += (\" name: \" + name if name is not None else \"\")\r\n-> 7164 raise core._status_to_exception(e) from None\r\n\r\nInternalError: Invalid: INVALID_ARGUMENT: arrow data type 0x7ff9899d8038 is not supported: Type error: Arrow data type is not supported [Op:IteratorGetNext]\r\n```\r\n\r\nSome additional notes:\r\n\r\n* I can actually decode an image encoded with `read_image()` (shown earlier):\r\n\r\n ```py\r\n sample_image_path = image_paths[0]\r\n encoded_image = read_image(sample_image_path)\r\n image = tf.image.decode_png(encoded_image, 3)\r\n print(image.shape)\r\n ```\r\n\r\n* If the above `tf.data.Dataset` object would have succeeded my plan was to just map the decoder like so:\r\n\r\n ```py\r\n autotune = tf.data.AUTOTUNE\r\n dataset = dataset.map(lambda x, y: (tf.image.decode_png(x, 3), y), num_parallel_calls=autotune)\r\n ```",
"@lhoestq I think I was able to make it work in the way you were envisioning. Here's the PoC:\r\nhttps://gist.github.com/sayakpaul/f7d5cc312cd01cb31098fad3fd9c6b59#file-feather-tf-poc-bytes-ipynb\r\n\r\nSome details:\r\n\r\n* I am currently serializing the images as strings with `base64`). In comparison to the flattened arrays as before, the size of the individual feather files has reduced (144 MB -> 85 MB, largest).\r\n* When decoding, I am first decoding the base64 string and then decoding that string (with `tf.io.decode_base64`) as an image with `tf.image.decode_png()`. \r\n* The entire workflow (from generating the Feather files to loading them and preparing the batched `tf.data` pipeline) involves the following libraries: `pyarraow`, `tensorflow-io`, and `tensorflow`. \r\n\r\nCc: @Rocketknight1 @gante ",
"Cool thanks ! Too bad the Arrow binary type doesn't seem to be supported in `arrow_io.ArrowFeatherDataset` :/ We would also need it to support Arrow struct type. Indeed images in `datasets` are represented using an Arrow type\r\n```python\r\npa.struct({\"path\": pa.string(), \"bytes\": pa.binary()})\r\n```\r\nnot sure yet how hard it is to support this though.\r\n\r\nChanging the typing on our side would create concerning breaking changes, that's why it would be awesome if it could work using these types",
"If the ArrowFeatherDataset doesn't yet support it, I guess our hands are a bit tied at the moment. \r\n\r\nIIUC, in my [latest PoC notebook](https://gist.github.com/sayakpaul/f7d5cc312cd01cb31098fad3fd9c6b59#file-feather-tf-poc-bytes-ipynb), you wanted to see each entry in the feather file to be represented like so?\r\n\r\n```\r\npa.struct({\"path\": pa.string(), \"bytes\": pa.binary()})\r\n``` \r\n\r\nIn that case, `pa.binary()` isn't yet supported.",
"> IIUC, in my [latest PoC notebook](https://gist.github.com/sayakpaul/f7d5cc312cd01cb31098fad3fd9c6b59#file-feather-tf-poc-bytes-ipynb), you wanted to see each entry in the feather file to be represented like so?\r\n> \r\n> pa.struct({\"path\": pa.string(), \"bytes\": pa.binary()})\r\n\r\nYea because that's the data format we're using. If we were to use base64, then we would have to process the full dataset to convert it, which can take some time. Converting to TFRecords would be simpler than converting to base64 in Feather files.\r\n\r\nMaybe it would take too much time to be worth exploring, but according to https://github.com/tensorflow/io/issues/1361#issuecomment-819029002 it's possible to add support for binary type in ArrowFeatherDataset. What do you think ? Any other alternative in mind ?",
"> Maybe it would take too much time to be worth exploring, but according to https://github.com/tensorflow/io/issues/1361#issuecomment-819029002 it's possible to add support for binary type in ArrowFeatherDataset.\r\n\r\nShould be possible as per the comment but there hasn't been any progress and it's been more than a year. \r\n\r\n> If we were to use base64, then we would have to process the full dataset to convert it, which can take some time.\r\n\r\nI don't understand this. I would think TFRecords would also need something similar but I need the context you're coming from. \r\n\r\n> What do you think ? Any other alternative in mind ?\r\n\r\nTFRecords since the TensorFlow ecosystem has developed good support for it over the years. ",
"> I don't understand this. I would think TFRecords would also need something similar but I need the context you're coming from.\r\n\r\nUsers already have a copy of the dataset in Arrow format (we can change this to Feather). So to load the Arrow/feather files to a TF dataset we need TF IO or something like that. Otherwise the user has to convert all the files from Arrow to TFRecords to use TF data efficiently. But the conversion needs resources: CPU, disk, time. Converting the images to base64 require the same sort of resources.\r\n\r\nSo the issue we're trying to tackle is how to load the Arrow data in TF without having to convert anything ^^",
"Yeah, it looks like in its current state the tfio support for `Feather` is incomplete, so we'd end up having to write a lot of it, or do a conversion that defeats the whole point (because if we're going to convert the whole dataset we might as well convert to `TFRecord`).",
"Understood @lhoestq. Thanks for explaining!\r\n\r\nAgreed with @Rocketknight1. ",
"@lhoestq Although I think this is a dead-end for now unfortunately, because of the limitations at TF's end, we could still explore automatic conversion to TFRecord, or I could dive into refining `to_tf_dataset()` to yield unbatched samples and/or load samples with multiprocessing to improve throughput. Do you have any preferences there?",
"> @lhoestq Although I think this is a dead-end for now unfortunately, because of the limitations at TF's end, we could still explore automatic conversion to TFRecord, or I could dive into refining `to_tf_dataset()` to yield unbatched samples and/or load samples with multiprocessing to improve throughput. Do you have any preferences there?\r\n\r\nHappy to take part there @Rocketknight1.",
"If `to_tf_dataset` can be unbatched, then it should be fairly easy for users to convert the TF dataset to TFRecords right ?",
"@lhoestq why one would convert to TFRecords after unbatching? ",
"> If to_tf_dataset can be unbatched, then it should be fairly easy for users to convert the TF dataset to TFRecords right ?\r\n\r\nSort of! A `tf.data.Dataset` is more like an iterator, and does not support sample indexing. `to_tf_dataset()` creates an iterator, but to convert that to `TFRecord`, the user would have to iterate over the whole thing and manually save the stream of samples to files. ",
"Someone would like to try to dive into tfio to fix this ? Sounds like a good opportunity to learn what are the best ways to load a dataset for TF, and also the connections between Arrow and TF.\r\n\r\nIf we can at least have the Arrow `binary` type working for TF that would be awesome already (issue https://github.com/tensorflow/io/issues/1361)\r\n\r\nalso cc @nateraw in case you'd be interested ;)",
"> Sounds like a good opportunity to learn what are the best ways to load a dataset for TF\r\n\r\nThe recommended way would likely be a combination of TFRecords and `tf.data`. \r\n\r\nExploring the connection between Arrow and TensorFlow is definitely worth pursuing though. But I am not sure about the implications of storing images in a format supported by Arrow. I guess we'll know more once we have at least figured out the support for `binary` type for TFIO. I will spend some time on it and keep this thread updated. ",
"I am currently working on a fine-tuning notebook for the TFSegFormer model (Semantic Segmentation). The resolution is high for both the input images and the labels - (512, 512, 3). Here's the [Colab Notebook](https://colab.research.google.com/drive/1jAtR7Z0lYX6m6JsDI5VByh5vFaNhHIbP?usp=sharing) (it's a WIP so please bear that in mind).\r\n\r\nI think the current implementation of `to_tf_dataset()` does create a bottleneck here since the GPU utilization is quite low. ",
"Here's a notebook showing the performance difference: https://colab.research.google.com/gist/sayakpaul/d7ca67c90beb47e354942c9d8c0bd8ef/scratchpad.ipynb. \r\n\r\nNote that I acknowledge that it's not an apples-to-apples comparison in many aspects (the dataset isn't the same, data serialization format isn't the same, etc.) but this is the best I could do. ",
"Thanks ! I think the speed difference can be partly explained: you use ds.shuffle in your dataset, which is an exact shuffling (compared to TFDS which does buffer shuffling): it slows down query time by 2x to 10x since it has to play with data that are not contiguous.\r\n\r\nThe rest of the speed difference seems to be caused by image decoding (from 330µs/image to 30ms/image)",
"Fair enough. Can do one without shuffling too. But it's an important one to consider I guess. "
] | "2022-06-22T14:42:00Z" | "2022-10-11T08:45:45Z" | null | MEMBER | null | null | null | To have better performance in TensorFlow, it is important to provide lists of data files in supported formats. For example sharded TFRecords datasets are extremely performant. This is because tf.data can better leverage parallelism in this case, and load one file at a time in memory.
It seems that using `tensorflow_io` we could have something similar for `to_tf_dataset` if we provide sharded Feather files: https://www.tensorflow.org/io/api_docs/python/tfio/arrow/ArrowFeatherDataset
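For illustration, a rough (unbenchmarked) sketch of how sharded Feather files could be fed to tf.data via `tensorflow_io`; the shard paths and the two-column layout are made up:
```python
import tensorflow as tf
import tensorflow_io.arrow as arrow_io

# hypothetical sharded Feather files with two columns (e.g. text, label)
files = [f"data/train-{i:05d}-of-00004.feather" for i in range(4)]

ds = arrow_io.ArrowFeatherDataset(
    files,
    columns=(0, 1),
    output_types=(tf.string, tf.int64),
    output_shapes=([], []),
    batch_mode="auto",  # emit one batch per Arrow record batch
)
ds = ds.prefetch(tf.data.AUTOTUNE)
```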
Feather is a format almost equivalent to the Arrow IPC Stream format we're using in `datasets`: Feather V2 is equivalent to Arrow IPC File format, which is an extension of the stream format (it has an extra footer). Therefore we could store datasets as Feather instead of Arrow IPC Stream format without breaking the whole library.
Here are a few points to explore
- [ ] check the performance of ArrowFeatherDataset in tf.data
- [ ] check what would change if we were to switch to Feather if needed, in particular check that those are fine: memory mapping, typing, writing, reading to python objects, etc.
We would also need to implement sharding when loading a dataset (this will be done anyway for #546)
cc @Rocketknight1 @gante feel free to comment in case I missed anything !
I'll share some files and scripts, so that we can benchmark performance of Feather files with tf.data | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4542/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4542/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4597 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4597/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4597/comments | https://api.github.com/repos/huggingface/datasets/issues/4597/events | https://github.com/huggingface/datasets/issues/4597 | 1,288,672,007 | I_kwDODunzps5Mz5MH | 4,597 | Streaming issue for financial_phrasebank | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
} | [
{
"color": "8B51EF",
"default": false,
"description": "",
"id": 4069435429,
"name": "hosted-on-google-drive",
"node_id": "LA_kwDODunzps7yjqgl",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hosted-on-google-drive"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"cc @huggingface/datasets: it seems like https://www.researchgate.net/ is flaky for datasets hosting (I put the \"hosted-on-google-drive\" tag since it's the same kind of issue I think)",
"Let's see if their license allows hosting their data on the Hub.",
"License is Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported (CC BY-NC-SA 3.0).\r\n\r\nWe can host their data on the Hub."
] | "2022-06-29T12:45:43Z" | "2022-07-01T09:29:36Z" | "2022-07-01T09:29:36Z" | MEMBER | null | null | null | ### Link
https://huggingface.co/datasets/financial_phrasebank/viewer/sentences_allagree/train
### Description
As reported by a community member using [AutoTrain Evaluate](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/5#62bc217436d0e5d316a768f0), there seems to be a problem streaming this dataset:
```
Server error
Status code: 400
Exception: Exception
Message: Give up after 5 attempts with ConnectionError
```
### Owner
No | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4597/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4597/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4897 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4897/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4897/comments | https://api.github.com/repos/huggingface/datasets/issues/4897/events | https://github.com/huggingface/datasets/issues/4897 | 1,351,784,727 | I_kwDODunzps5QkpkX | 4,897 | datasets generate large arrow file | {
"avatar_url": "https://avatars.githubusercontent.com/u/18533904?v=4",
"events_url": "https://api.github.com/users/osayes/events{/privacy}",
"followers_url": "https://api.github.com/users/osayes/followers",
"following_url": "https://api.github.com/users/osayes/following{/other_user}",
"gists_url": "https://api.github.com/users/osayes/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/osayes",
"id": 18533904,
"login": "osayes",
"node_id": "MDQ6VXNlcjE4NTMzOTA0",
"organizations_url": "https://api.github.com/users/osayes/orgs",
"received_events_url": "https://api.github.com/users/osayes/received_events",
"repos_url": "https://api.github.com/users/osayes/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/osayes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osayes/subscriptions",
"type": "User",
"url": "https://api.github.com/users/osayes"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi ! The cache files are the results of all the transforms you applied to the dataset using `map` for example.\r\nDid you run a transform that could potentially blow up the size of the dataset ?",
"@lhoestq,\r\nI don't remember, but I can't imagine what kind of transform may generate data that grow over 200 times in size. \r\nI think maybe it doesn' matter, it's just cache after all."
] | "2022-08-26T05:51:16Z" | "2022-09-18T05:07:52Z" | "2022-09-18T05:07:52Z" | NONE | null | null | null | Checking the large file in disk, and found the large cache file in the cifar10 data directory:
![image](https://user-images.githubusercontent.com/18533904/186830449-ba96cdeb-0fe8-4543-994d-2abe7145933f.png)
As we know, the cifar10 dataset is only ~130 MB, but this cache file is almost 30 GB, so there may be a problem here. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4897/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4897/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6290 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6290/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6290/comments | https://api.github.com/repos/huggingface/datasets/issues/6290/events | https://github.com/huggingface/datasets/issues/6290 | 1,935,629,679 | I_kwDODunzps5zX11v | 6,290 | Incremental dataset (e.g. `.push_to_hub(..., append=True)`) | {
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Wauplin",
"id": 11801849,
"login": "Wauplin",
"node_id": "MDQ6VXNlcjExODAxODQ5",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Wauplin"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Yea I think waiting for #6269 would be best, or branching from it. For reference, this [PR](https://github.com/LAION-AI/Discord-Scrapers/pull/2) is progressing pretty well which will do similar using the hf hub for our LAION dataset bot https://github.com/LAION-AI/Discord-Scrapers/pull/2. "
] | "2023-10-10T15:18:03Z" | "2023-10-13T16:05:26Z" | null | CONTRIBUTOR | null | null | null | ### Feature request
Have the possibility to do `ds.push_to_hub(..., append=True)`.
### Motivation
Requested in this [comment](https://huggingface.co/datasets/laion/dalle-3-dataset/discussions/3#65252597c4edc168202a5eaa) and
this [comment](https://huggingface.co/datasets/laion/dalle-3-dataset/discussions/4#6524f675c9607bdffb208d8f). Discussed internally on [slack](https://huggingface.slack.com/archives/C02EMARJ65P/p1696950642610639?thread_ts=1690554266.830949&cid=C02EMARJ65P).
### Your contribution
What I suggest to do for parquet datasets is to use `CommitOperationCopy` + `CommitOperationDelete` from `huggingface_hub`:
1. list files
2. copy files from parquet-0001-of-0004 to parquet-0001-of-0005
3. delete files like parquet-0001-of-0004
4. generate + add last parquet file parquet-0005-of-0005
=> make a single commit with all commit operations at once
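A hedged sketch of what those steps could look like with `huggingface_hub` (the repo id, shard names and local file are placeholders, and a real implementation would first list the existing shards):
```python
from huggingface_hub import (
    CommitOperationAdd,
    CommitOperationCopy,
    CommitOperationDelete,
    HfApi,
)

api = HfApi()
repo_id = "username/my-dataset"  # placeholder

operations = [
    # steps 2-3: rename existing shards to reflect the new shard count
    CommitOperationCopy(
        src_path_in_repo="data/train-00000-of-00004.parquet",
        path_in_repo="data/train-00000-of-00005.parquet",
    ),
    CommitOperationDelete(path_in_repo="data/train-00000-of-00004.parquet"),
    # ... repeat for the other existing shards ...
    # step 4: add the newly generated shard
    CommitOperationAdd(
        path_in_repo="data/train-00004-of-00005.parquet",
        path_or_fileobj="new_shard.parquet",  # placeholder local file
    ),
]

# single commit with all operations at once
api.create_commit(
    repo_id=repo_id,
    repo_type="dataset",
    operations=operations,
    commit_message="Append new data",
)
```
Doing the copy/delete/add in one `create_commit` call keeps the repo consistent: no intermediate revision ever exposes a partially renamed set of shards.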
I think it should be quite straightforward to implement. Happy to review a PR (maybe conflicting with the ongoing "1 commit push_to_hub" PR https://github.com/huggingface/datasets/pull/6269) | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6290/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6290/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4468 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4468/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4468/comments | https://api.github.com/repos/huggingface/datasets/issues/4468/events | https://github.com/huggingface/datasets/pull/4468 | 1,266,715,742 | PR_kwDODunzps45bERK | 4,468 | Generalize tutorials for audio and vision | {
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu"
} | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-06-09T22:00:44Z" | "2022-06-14T16:22:02Z" | "2022-06-14T16:12:00Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4468.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4468",
"merged_at": "2022-06-14T16:12:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4468.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4468"
} | This PR updates the tutorials to be more generalizable to all modalities. After reading the tutorials, a user should be able to load any type of dataset, know how to index into and slice a dataset, and do the most basic/common type of preprocessing (tokenization, resampling, applying transforms) depending on their dataset.
Other changes include:
- Removed the sections about a dataset's metadata, features, and columns because we cover this in an earlier tutorial about inspecting the `DatasetInfo` through the dataset builder.
- Separate the sharing dataset tutorial into two sections: (1) uploading via the web interface and (2) using the `huggingface_hub` library.
- Renamed some tutorials in the TOC to be more clear and specific.
- Added more text to nudge users towards joining the community and asking questions on the forums.
- If it's okay with everyone, I'd also like to remove the section about loading and using metrics since we have the `evaluate` docs now.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4468/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4468/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/294 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/294/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/294/comments | https://api.github.com/repos/huggingface/datasets/issues/294/events | https://github.com/huggingface/datasets/issues/294 | 643,181,179 | MDU6SXNzdWU2NDMxODExNzk= | 294 | Cannot load arxiv dataset on MacOS? | {
"avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4",
"events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}",
"followers_url": "https://api.github.com/users/JohnGiorgi/followers",
"following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JohnGiorgi",
"id": 8917831,
"login": "JohnGiorgi",
"node_id": "MDQ6VXNlcjg5MTc4MzE=",
"organizations_url": "https://api.github.com/users/JohnGiorgi/orgs",
"received_events_url": "https://api.github.com/users/JohnGiorgi/received_events",
"repos_url": "https://api.github.com/users/JohnGiorgi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JohnGiorgi"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | [] | null | [
"I couldn't replicate this issue on my macbook :/\r\nCould you try to play with different encodings in `with open(path, encoding=...) as f` in scientific_papers.py:L108 ?",
"I was able to track down the file causing the problem by adding the following to `scientific_papers.py` (starting at line 116):\r\n\r\n```python\r\n from json import JSONDecodeError\r\n try:\r\n d = json.loads(line)\r\n summary = \"\\n\".join(d[\"abstract_text\"])\r\n except JSONDecodeError:\r\n print(path, line)\r\n```\r\n\r\n\r\n\r\nFor me it was at: `/Users/johngiorgi/.cache/huggingface/datasets/f87fd498c5003cbe253a2af422caa1e58f87a4fd74cb3e67350c635c8903b259/arxiv-dataset/train.txt` with `\"article_id\": \"1407.3051\"`.\r\n\r\nNot really 100% sure at the moment, but it looks like this specific substring from `\"article_text\"` may be causing the problem?\r\n\r\n```\r\n\"after the missing - mass scale adjustment , the validity of the corrections was tested in the @xmath85 productions at 1.69 gev/@xmath1 . in fig . [\", \"fig : calibrations ] ( a ) , we show the missing - mass spectrum in the @xmath86 region in the @xmath87 reaction at 1.69 gev/@xmath1 . a fitting result with a lorentzian function for the @xmath86 ( dashed line ) and the three - body phas\r\n```\r\n\r\nperhaps because it appears to be truncated. I (think) I can recreate the problem by doing the following:\r\n\r\n```python\r\nimport json\r\n\r\n# A minimal example of the json file that causes the error\r\ninvalid_json = '{\"article_id\": \"1407.3051\", \"article_text\": [\"the missing - mass resolution was obtained to be 2.8 @xmath3 0.1 mev/@xmath4 ( fwhm ) , which corresponds to the missing - mass resolution of 3.2 @xmath3 0.2 mev/@xmath4 ( fwhm ) at the @xmath6 cusp region in the @xmath0 reaction .\", \"this resolution is at least by a factor of 2 better than the previous measurement with the same reaction ( [email protected] mev/@xmath4 in @xmath84 ) @xcite .\", \"after the missing - mass scale adjustment , the validity of the corrections was tested in the @xmath85 productions at 1.69 gev/@xmath1 . in fig . [\", \"fig : calibrations ] ( a ) , we show the missing - mass spectrum in the @xmath86 region in the @xmath87 reaction at 1.69 gev/@xmath1 . a fitting result with a lorentzian function for the @xmath86 ( dashed line ) and the three - body phas' \r\n# The line of code from `scientific_papers.py` which appears to cause the error\r\njson.loads(invalid_json)\r\n```\r\n\r\nThis is as far as I get before I am stumped.",
"I just checked inside `train.txt` and this line isn't truncated for me (line 163577).\r\nCould you try to clear your cache and re-download the dataset ?",
"Ah the turn-it-off-turn-it-on again solution! That did it, thanks a lot :) "
] | "2020-06-22T15:46:55Z" | "2020-06-30T15:25:10Z" | "2020-06-30T15:25:10Z" | CONTRIBUTOR | null | null | null | I am having trouble loading the `"arxiv"` config from the `"scientific_papers"` dataset on MacOS. When I try loading the dataset with:
```python
arxiv = nlp.load_dataset("scientific_papers", "arxiv")
```
I get the following stack trace:
```bash
JSONDecodeError Traceback (most recent call last)
<ipython-input-2-8e00c55d5a59> in <module>
----> 1 arxiv = nlp.load_dataset("scientific_papers", "arxiv")
~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
522 download_mode=download_mode,
523 ignore_verifications=ignore_verifications,
--> 524 save_infos=save_infos,
525 )
526
~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
430 verify_infos = not save_infos and not ignore_verifications
431 self._download_and_prepare(
--> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
433 )
434 # Sync info
~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
481 try:
482 # Prepare split will record examples associated to the split
--> 483 self._prepare_split(split_generator, **prepare_split_kwargs)
484 except OSError:
485 raise OSError("Cannot find data file. " + (self.manual_download_instructions or ""))
~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/builder.py in _prepare_split(self, split_generator)
662
663 generator = self._generate_examples(**split_generator.gen_kwargs)
--> 664 for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False):
665 example = self.info.features.encode_example(record)
666 writer.write(example)
~/miniconda3/envs/t2t/lib/python3.7/site-packages/tqdm/std.py in __iter__(self)
1106 fp_write=getattr(self.fp, 'write', sys.stderr.write))
1107
-> 1108 for obj in iterable:
1109 yield obj
1110 # Update and possibly print the progressbar.
~/miniconda3/envs/t2t/lib/python3.7/site-packages/nlp/datasets/scientific_papers/107a416c0e1958cb846f5934b5aae292f7884a5b27e86af3f3ef1a093e058bbc/scientific_papers.py in _generate_examples(self, path)
114 # "section_names": list[str], list of section names.
115 # "sections": list[list[str]], list of sections (list of paragraphs)
--> 116 d = json.loads(line)
117 summary = "\n".join(d["abstract_text"])
118 # In original paper, <S> and </S> are not used in vocab during training
~/miniconda3/envs/t2t/lib/python3.7/json/__init__.py in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
346 parse_int is None and parse_float is None and
347 parse_constant is None and object_pairs_hook is None and not kw):
--> 348 return _default_decoder.decode(s)
349 if cls is None:
350 cls = JSONDecoder
~/miniconda3/envs/t2t/lib/python3.7/json/decoder.py in decode(self, s, _w)
335
336 """
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
338 end = _w(s, end).end()
339 if end != len(s):
~/miniconda3/envs/t2t/lib/python3.7/json/decoder.py in raw_decode(self, s, idx)
351 """
352 try:
--> 353 obj, end = self.scan_once(s, idx)
354 except StopIteration as err:
355 raise JSONDecodeError("Expecting value", s, err.value) from None
JSONDecodeError: Unterminated string starting at: line 1 column 46983 (char 46982)
163502 examples [02:10, 2710.68 examples/s]
```
I am not sure how to trace back to the specific JSON file that has the "Unterminated string". Also, I do not get this error on colab so I suspect it may be MacOS specific. Copy pasting the relevant lines from `transformers-cli env` below:
- Platform: Darwin-19.5.0-x86_64-i386-64bit
- Python version: 3.7.5
- PyTorch version (GPU?): 1.5.0 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
Any ideas? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/294/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/294/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1265 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1265/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1265/comments | https://api.github.com/repos/huggingface/datasets/issues/1265/events | https://github.com/huggingface/datasets/pull/1265 | 758,687,223 | MDExOlB1bGxSZXF1ZXN0NTMzODE4NjY0 | 1,265 | Add CovidQA dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/4341867?v=4",
"events_url": "https://api.github.com/users/olinguyen/events{/privacy}",
"followers_url": "https://api.github.com/users/olinguyen/followers",
"following_url": "https://api.github.com/users/olinguyen/following{/other_user}",
"gists_url": "https://api.github.com/users/olinguyen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/olinguyen",
"id": 4341867,
"login": "olinguyen",
"node_id": "MDQ6VXNlcjQzNDE4Njc=",
"organizations_url": "https://api.github.com/users/olinguyen/orgs",
"received_events_url": "https://api.github.com/users/olinguyen/received_events",
"repos_url": "https://api.github.com/users/olinguyen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/olinguyen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/olinguyen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/olinguyen"
} | [] | closed | false | null | [] | null | [
"It seems to share the same name as this dataset: https://openreview.net/forum?id=JENSKEEzsoU",
"> It seems to share the same name as this dataset: https://openreview.net/forum?id=JENSKEEzsoU\r\n\r\nyou're right it can be confusing. I'll add the organization/research group for clarity: `covid_qa_castorini`. I added the dataset you shared as `covid_qa_deepset` in another PR (#1182) ",
"Thanks for avoiding the name collision !"
] | "2020-12-07T17:06:51Z" | "2020-12-08T17:02:26Z" | "2020-12-08T17:02:26Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1265.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1265",
"merged_at": "2020-12-08T17:02:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1265.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1265"
} | This PR adds CovidQA, a question answering dataset specifically designed for COVID-19, built by hand from knowledge gathered from Kaggle’s COVID-19 Open Research Dataset Challenge.
Link to the paper: https://arxiv.org/pdf/2004.11339.pdf
Link to the homepage: https://covidqa.ai | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1265/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1265/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3893 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3893/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3893/comments | https://api.github.com/repos/huggingface/datasets/issues/3893/events | https://github.com/huggingface/datasets/pull/3893 | 1,166,551,684 | PR_kwDODunzps40TmxB | 3,893 | Add default branch for doc building | {
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sgugger",
"id": 35901082,
"login": "sgugger",
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"repos_url": "https://api.github.com/users/sgugger/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sgugger"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3893). All of your documentation changes will be reflected on that endpoint.",
"Yes! And when we discovered on the Transformers side that this check fails on the GitHub actions, we added a config attribute to have a default. Setting in Transformers fixed the issue of the doc being deployed to main, so porting the fix here too :-)"
] | "2022-03-11T15:24:27Z" | "2022-03-11T15:34:35Z" | "2022-03-11T15:34:34Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3893.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3893",
"merged_at": "2022-03-11T15:34:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3893.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3893"
} | Since other libraries use `main` as their default branch and it's now the standard default, you have to specify a different name in the doc config if you're using `master` like datasets (`doc-builder` tries to guess it, but in the job, we have weird checkout of merge commits so it doesn't always manage to get it right).
This PR makes sure it will always use master for the dev doc (until you decide to switch to main) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3893/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3893/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2576 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2576/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2576/comments | https://api.github.com/repos/huggingface/datasets/issues/2576/events | https://github.com/huggingface/datasets/pull/2576 | 934,986,761 | MDExOlB1bGxSZXF1ZXN0NjgxOTc5MTA1 | 2,576 | Add mC4 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-07-01T15:51:25Z" | "2021-07-02T14:50:56Z" | "2021-07-02T14:50:55Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2576.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2576",
"merged_at": "2021-07-02T14:50:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2576.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2576"
} | AllenAI is now hosting the processed C4 and mC4 dataset in this repo: https://huggingface.co/datasets/allenai/c4
Thanks a lot to them !
In this PR I added the mC4 dataset builder. It supports 108 languages
You can load it with
```python
from datasets import load_dataset
en_mc4 = load_dataset("mc4", "en")
fr_mc4 = load_dataset("mc4", "fr")
en_and_fr_mc4 = load_dataset("mc4", languages=["en", "fr"])
```
It also supports streaming, if you don't want to download hundreds of GB of data:
```python
en_mc4 = load_dataset("mc4", "en", streaming=True)
```
Regarding the dataset_infos.json, I will add them once I have them.
Also we can work on the dataset card that will be at https://huggingface.co/datasets/mc4
For now I just added a link to https://huggingface.co/datasets/allenai/c4 as well as a few sections | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2576/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2576/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4878 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4878/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4878/comments | https://api.github.com/repos/huggingface/datasets/issues/4878/events | https://github.com/huggingface/datasets/issues/4878 | 1,348,270,141 | I_kwDODunzps5QXPg9 | 4,878 | [not really a bug] `identical_ok` is deprecated in huggingface-hub's `upload_file` | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [
{
"color": "008672",
"default": true,
"description": "Extra attention is needed",
"id": 1935892884,
"name": "help wanted",
"node_id": "MDU6TGFiZWwxOTM1ODkyODg0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/help%20wanted"
},
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | closed | false | null | [] | null | [
"Resolved via https://github.com/huggingface/datasets/pull/4937."
] | "2022-08-23T17:09:55Z" | "2022-09-13T14:00:06Z" | "2022-09-13T14:00:05Z" | CONTRIBUTOR | null | null | null | In the huggingface-hub dependency, the `identical_ok` argument has no effect in `upload_file` (and it will be removed soon)
See
https://github.com/huggingface/huggingface_hub/blob/43499582b19df1ed081a5b2bd7a364e9cacdc91d/src/huggingface_hub/hf_api.py#L2164-L2169
It's used here:
https://github.com/huggingface/datasets/blob/fcfcc951a73efbc677f9def9a8707d0af93d5890/src/datasets/dataset_dict.py#L1373-L1381
https://github.com/huggingface/datasets/blob/fdcb8b144ce3ef241410281e125bd03e87b8caa1/src/datasets/arrow_dataset.py#L4354-L4362
https://github.com/huggingface/datasets/blob/fdcb8b144ce3ef241410281e125bd03e87b8caa1/src/datasets/arrow_dataset.py#L4197-L4213
We should remove it.
Maybe the third code sample has an unexpected behavior since it uses the non-default value `identical_ok = False`, but the argument is ignored. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4878/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4878/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4869 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4869/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4869/comments | https://api.github.com/repos/huggingface/datasets/issues/4869/events | https://github.com/huggingface/datasets/pull/4869 | 1,345,513,758 | PR_kwDODunzps49hBGY | 4,869 | Fix typos in documentation | {
"avatar_url": "https://avatars.githubusercontent.com/u/85993954?v=4",
"events_url": "https://api.github.com/users/fl-lo/events{/privacy}",
"followers_url": "https://api.github.com/users/fl-lo/followers",
"following_url": "https://api.github.com/users/fl-lo/following{/other_user}",
"gists_url": "https://api.github.com/users/fl-lo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fl-lo",
"id": 85993954,
"login": "fl-lo",
"node_id": "MDQ6VXNlcjg1OTkzOTU0",
"organizations_url": "https://api.github.com/users/fl-lo/orgs",
"received_events_url": "https://api.github.com/users/fl-lo/received_events",
"repos_url": "https://api.github.com/users/fl-lo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fl-lo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fl-lo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fl-lo"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-08-21T15:10:03Z" | "2022-08-22T09:25:39Z" | "2022-08-22T09:09:58Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4869.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4869",
"merged_at": "2022-08-22T09:09:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4869.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4869"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4869/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4869/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1014 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1014/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1014/comments | https://api.github.com/repos/huggingface/datasets/issues/1014/events | https://github.com/huggingface/datasets/pull/1014 | 755,505,851 | MDExOlB1bGxSZXF1ZXN0NTMxMjAzNzAz | 1,014 | Add SciTLDR Dataset (Take 2) | {
"avatar_url": "https://avatars.githubusercontent.com/u/13381361?v=4",
"events_url": "https://api.github.com/users/bharatr21/events{/privacy}",
"followers_url": "https://api.github.com/users/bharatr21/followers",
"following_url": "https://api.github.com/users/bharatr21/following{/other_user}",
"gists_url": "https://api.github.com/users/bharatr21/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bharatr21",
"id": 13381361,
"login": "bharatr21",
"node_id": "MDQ6VXNlcjEzMzgxMzYx",
"organizations_url": "https://api.github.com/users/bharatr21/orgs",
"received_events_url": "https://api.github.com/users/bharatr21/received_events",
"repos_url": "https://api.github.com/users/bharatr21/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bharatr21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bharatr21/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bharatr21"
} | [] | closed | false | null | [] | null | [
"@lhoestq please review this PR when you get free",
"If the CI fails just because of `RemoteDatasetTest` errors it's fine, they're fixed on master",
"> If the CI fails just because of `RemoteDatasetTest` errors it's fine, they're fixed on master\r\n\r\nThe same 3 tests are failing again :(\r\n```\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_class_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_configs_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_norwegian_ner\r\n```",
"One trick if you want to add more datasets to avoid these errors : you can just rebase the master branch of your fork from the master branch of the repo. Then each time you make a new branch from master on your fork, it will include the fix for these errors",
"> One trick if you want to add more datasets to avoid these errors : you can just rebase the master branch of your fork from the master branch of the repo. Then each time you make a new branch from master on your fork, it will include the fix for these errors\r\n\r\nYes, I almost always do that, but somehow seems even this branch got old 😓 \r\nI also do the following if I directly create a new branch locally: `git checkout -b <branchname> upstream/master` so it stays up-to date irrespective of my fork, still don't know how this crept in again",
"Merging this one since the CI is fixed on master"
] | "2020-12-02T18:22:50Z" | "2020-12-02T18:55:10Z" | "2020-12-02T18:37:58Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1014.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1014",
"merged_at": "2020-12-02T18:37:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1014.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1014"
} | Adds the SciTLDR Dataset by AI2
Added the `README.md` card with tags to the best of my knowledge
Multi-target summaries or TLDRs of Scientific Documents
Continued from #986 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1014/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1014/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4388 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4388/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4388/comments | https://api.github.com/repos/huggingface/datasets/issues/4388/events | https://github.com/huggingface/datasets/pull/4388 | 1,244,645,158 | PR_kwDODunzps44RAG1 | 4,388 | Set builder name from module instead of class | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-05-23T06:26:35Z" | "2022-05-25T05:24:43Z" | "2022-05-25T05:16:15Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4388.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4388",
"merged_at": "2022-05-25T05:16:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4388.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4388"
} | Now the builder name attribute is set from the builder class name.
This PR sets the builder name attribute from the module name instead. Some motivating reasons:
- The dataset ID is relevant and unique among all datasets and this is directly related to the repository name, i.e., the name of the directory containing the dataset
- The name of the module (i.e. the file containing the loading script) is already relevant for loading: it must have the same name as its containing directory (related to the dataset ID), as we search for it using its directory name
- On the other hand, the name of the builder class is not relevant for loading: in our code, we just search for a class which is a subclass of `DatasetBuilder` (independently of its name). We do not put any constraint on the naming of the builder class and indeed it can have a name completely different from its module/directory/dataset_id
IMO it makes more sense to align the caching directory name with the dataset_id/directory/module name instead of the builder class name.
Fix #4381. | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4388/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4388/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5697 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5697/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5697/comments | https://api.github.com/repos/huggingface/datasets/issues/5697/events | https://github.com/huggingface/datasets/pull/5697 | 1,651,812,614 | PR_kwDODunzps5NefxZ | 5,697 | Raise an error on missing distributed seed | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009644 / 0.011353 (-0.001709) | 0.006407 / 0.011008 (-0.004601) | 0.148353 / 0.038508 (0.109845) | 0.037537 / 0.023109 (0.014428) | 0.379697 / 0.275898 (0.103799) | 0.466260 / 0.323480 (0.142780) | 0.007884 / 0.007986 (-0.000102) | 0.005140 / 0.004328 (0.000812) | 0.111078 / 0.004250 (0.106827) | 0.049429 / 0.037052 (0.012377) | 0.364766 / 0.258489 (0.106277) | 0.453809 / 0.293841 (0.159968) | 0.051918 / 0.128546 (-0.076628) | 0.020081 / 0.075646 (-0.055566) | 0.616041 / 0.419271 (0.196770) | 0.059834 / 0.043533 (0.016301) | 0.373104 / 0.255139 (0.117965) | 0.419304 / 0.283200 (0.136104) | 0.113526 / 0.141683 (-0.028156) | 1.827160 / 1.452155 (0.375006) | 1.912092 / 1.492716 (0.419376) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.269584 / 0.018006 (0.251578) | 0.554100 / 0.000490 (0.553610) | 0.006618 / 0.000200 (0.006418) | 0.000093 / 0.000054 (0.000039) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025280 / 0.037411 (-0.012131) | 0.123116 / 0.014526 (0.108591) | 0.127674 / 0.176557 (-0.048883) | 0.189106 / 0.737135 (-0.548030) | 0.142072 / 0.296338 (-0.154267) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.602201 / 0.215209 (0.386992) | 5.959610 / 2.077655 (3.881956) | 2.404856 / 1.504120 (0.900736) | 2.175017 / 1.541195 (0.633823) | 2.154360 / 1.468490 
(0.685870) | 1.265339 / 4.584777 (-3.319438) | 5.598429 / 3.745712 (1.852716) | 5.130249 / 5.269862 (-0.139612) | 2.764922 / 4.565676 (-1.800754) | 0.143232 / 0.424275 (-0.281043) | 0.014721 / 0.007607 (0.007114) | 0.764734 / 0.226044 (0.538689) | 7.518810 / 2.268929 (5.249882) | 3.344734 / 55.444624 (-52.099890) | 2.601158 / 6.876477 (-4.275319) | 2.726018 / 2.142072 (0.583945) | 1.397918 / 4.805227 (-3.407309) | 0.253277 / 6.500664 (-6.247387) | 0.077772 / 0.075469 (0.002303) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.499535 / 1.841788 (-0.342253) | 17.782490 / 8.074308 (9.708182) | 21.953064 / 10.191392 (11.761672) | 0.248753 / 0.680424 (-0.431671) | 0.029194 / 0.534201 (-0.505007) | 0.529700 / 0.579283 (-0.049583) | 0.618412 / 0.434364 (0.184048) | 0.605062 / 0.540337 (0.064725) | 0.725661 / 1.386936 (-0.661275) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009489 / 0.011353 (-0.001864) | 0.006423 / 0.011008 (-0.004585) | 0.096789 / 0.038508 (0.058281) | 0.034639 / 0.023109 (0.011530) | 0.403875 / 0.275898 (0.127977) | 0.439368 / 0.323480 (0.115888) | 0.006354 / 0.007986 (-0.001631) | 0.006794 / 0.004328 (0.002466) | 0.095537 / 0.004250 (0.091287) | 0.047749 / 0.037052 (0.010697) | 0.424157 / 0.258489 (0.165668) | 0.487825 / 0.293841 (0.193984) | 0.054675 / 0.128546 (-0.073872) | 0.021349 / 0.075646 (-0.054297) | 0.108917 / 0.419271 (-0.310354) | 0.075891 / 0.043533 (0.032358) | 0.412889 / 0.255139 (0.157750) | 0.464512 / 0.283200 (0.181312) | 0.118832 / 0.141683 (-0.022850) | 1.721215 / 1.452155 (0.269060) | 1.857195 / 1.492716 (0.364478) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.248308 / 0.018006 (0.230302) | 0.559496 / 0.000490 (0.559006) | 0.007136 / 0.000200 (0.006936) | 0.000160 / 0.000054 (0.000106) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031772 / 0.037411 (-0.005639) | 0.123565 / 0.014526 (0.109039) | 0.132660 / 0.176557 (-0.043896) | 0.201428 / 0.737135 (-0.535707) | 0.135238 / 0.296338 (-0.161101) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.646978 / 0.215209 (0.431769) | 6.183477 / 2.077655 (4.105822) | 2.782117 / 1.504120 (1.277997) | 2.294093 / 1.541195 (0.752898) | 2.346932 / 1.468490 (0.878442) | 1.239085 / 4.584777 (-3.345692) | 5.696364 / 3.745712 (1.950652) | 4.980102 / 5.269862 (-0.289759) | 2.278116 / 4.565676 (-2.287560) | 0.157339 / 0.424275 (-0.266936) | 0.014936 / 0.007607 (0.007329) | 0.778001 / 0.226044 (0.551957) | 7.708066 / 2.268929 (5.439138) | 3.412235 / 55.444624 (-52.032389) | 2.670670 / 6.876477 (-4.205806) | 2.731802 / 2.142072 (0.589730) | 1.446516 / 4.805227 (-3.358712) | 0.263689 / 6.500664 (-6.236975) | 0.086359 / 0.075469 (0.010890) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.573169 / 1.841788 (-0.268619) | 17.690842 / 8.074308 (9.616534) | 20.343336 / 10.191392 (10.151944) | 0.231028 / 0.680424 (-0.449396) | 0.025954 / 0.534201 (-0.508247) | 0.570554 / 0.579283 (-0.008729) | 0.610453 / 0.434364 (0.176089) | 0.675830 / 0.540337 (0.135493) | 0.790650 / 1.386936 (-0.596286) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d094ed07823bfb3271f3a9006daa1f92a64967a5 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007553 / 0.011353 (-0.003800) | 0.005426 / 0.011008 (-0.005582) | 0.096550 / 0.038508 (0.058042) | 0.034393 / 0.023109 (0.011284) | 0.322297 / 0.275898 (0.046399) | 0.340943 / 0.323480 (0.017463) | 0.006350 / 0.007986 (-0.001635) | 0.005700 / 0.004328 (0.001372) | 0.074929 / 0.004250 (0.070678) | 0.054819 / 0.037052 (0.017767) | 0.320151 / 0.258489 (0.061662) | 0.346957 / 0.293841 (0.053116) | 0.036659 / 0.128546 (-0.091887) | 0.012443 / 0.075646 (-0.063204) | 0.332232 / 0.419271 (-0.087040) | 0.051467 / 0.043533 (0.007934) | 0.310952 / 0.255139 (0.055813) | 0.325617 / 0.283200 (0.042417) | 0.104908 / 0.141683 (-0.036775) | 1.446752 / 1.452155 (-0.005403) | 1.558773 / 1.492716 (0.066056) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.300639 / 0.018006 (0.282633) | 0.499901 / 0.000490 (0.499411) | 0.007340 / 0.000200 (0.007140) | 0.000255 / 0.000054 (0.000201) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027206 / 0.037411 (-0.010206) | 0.105603 / 0.014526 (0.091077) | 0.118669 / 0.176557 (-0.057887) | 0.174050 / 0.737135 (-0.563086) | 0.125099 / 0.296338 (-0.171239) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404285 / 0.215209 (0.189076) | 4.034587 / 2.077655 (1.956933) | 1.812639 / 1.504120 (0.308519) | 1.625745 / 1.541195 (0.084551) | 1.735523 / 1.468490 
(0.267033) | 0.709699 / 4.584777 (-3.875078) | 3.802196 / 3.745712 (0.056484) | 3.656984 / 5.269862 (-1.612877) | 1.968470 / 4.565676 (-2.597206) | 0.086612 / 0.424275 (-0.337663) | 0.012368 / 0.007607 (0.004761) | 0.502622 / 0.226044 (0.276577) | 5.017876 / 2.268929 (2.748948) | 2.279794 / 55.444624 (-53.164831) | 1.956938 / 6.876477 (-4.919538) | 2.150430 / 2.142072 (0.008357) | 0.847691 / 4.805227 (-3.957536) | 0.170157 / 6.500664 (-6.330507) | 0.064141 / 0.075469 (-0.011328) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.172246 / 1.841788 (-0.669542) | 15.229444 / 8.074308 (7.155136) | 14.715913 / 10.191392 (4.524521) | 0.192501 / 0.680424 (-0.487923) | 0.017972 / 0.534201 (-0.516229) | 0.423834 / 0.579283 (-0.155449) | 0.423019 / 0.434364 (-0.011345) | 0.493298 / 0.540337 (-0.047039) | 0.589833 / 1.386936 (-0.797103) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007773 / 0.011353 (-0.003580) | 0.005449 / 0.011008 (-0.005560) | 0.075180 / 0.038508 (0.036672) | 0.035221 / 0.023109 (0.012111) | 0.338169 / 0.275898 (0.062271) | 0.374002 / 0.323480 (0.050522) | 0.006391 / 0.007986 (-0.001595) | 0.004406 / 0.004328 (0.000078) | 0.074925 / 0.004250 (0.070675) | 0.056527 / 0.037052 (0.019475) | 0.338071 / 0.258489 (0.079582) | 0.391882 / 0.293841 (0.098041) | 0.037241 / 0.128546 (-0.091305) | 0.012546 / 0.075646 (-0.063100) | 0.087331 / 0.419271 (-0.331940) | 0.049851 / 0.043533 (0.006318) | 0.335264 / 0.255139 (0.080125) | 0.354813 / 0.283200 (0.071614) | 0.110614 / 0.141683 (-0.031069) | 1.432782 / 1.452155 (-0.019372) | 1.548800 / 1.492716 (0.056083) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.307892 / 0.018006 (0.289886) | 0.518809 / 0.000490 (0.518319) | 0.004058 / 0.000200 (0.003858) | 0.000099 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029155 / 0.037411 (-0.008256) | 0.111706 / 0.014526 (0.097180) | 0.122964 / 0.176557 (-0.053592) | 0.170939 / 0.737135 (-0.566196) | 0.128538 / 0.296338 (-0.167801) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426529 / 0.215209 (0.211320) | 4.254218 / 2.077655 (2.176563) | 2.011455 / 1.504120 (0.507335) | 1.817397 / 1.541195 (0.276202) | 1.952915 / 1.468490 (0.484425) | 0.705052 / 4.584777 (-3.879725) | 3.844458 / 3.745712 (0.098746) | 3.592754 / 5.269862 (-1.677107) | 1.573567 / 4.565676 (-2.992109) | 0.086834 / 0.424275 (-0.337441) | 0.012389 / 0.007607 (0.004782) | 0.541695 / 0.226044 (0.315650) | 5.224492 / 2.268929 (2.955564) | 2.473648 / 55.444624 (-52.970976) | 2.167458 / 6.876477 (-4.709019) | 2.253319 / 2.142072 (0.111246) | 0.836322 / 4.805227 (-3.968905) | 0.168680 / 6.500664 (-6.331984) | 0.065699 / 0.075469 (-0.009770) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.281886 / 1.841788 (-0.559902) | 15.451741 / 8.074308 (7.377433) | 14.906870 / 10.191392 (4.715478) | 0.168554 / 0.680424 (-0.511870) | 0.017365 / 0.534201 (-0.516836) | 0.434183 / 0.579283 (-0.145100) | 0.421891 / 0.434364 (-0.012473) | 0.538993 / 0.540337 (-0.001344) | 0.636212 / 1.386936 (-0.750724) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1f428b8172319a6bfe95d7a4356b1d14a8d386d8 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007362 / 0.011353 (-0.003991) | 0.004992 / 0.011008 (-0.006016) | 0.098730 / 0.038508 (0.060222) | 0.033673 / 0.023109 (0.010563) | 0.296334 / 0.275898 (0.020436) | 0.328208 / 0.323480 (0.004728) | 0.005658 / 0.007986 (-0.002327) | 0.004130 / 0.004328 (-0.000199) | 0.074596 / 0.004250 (0.070346) | 0.048230 / 0.037052 (0.011178) | 0.295631 / 0.258489 (0.037142) | 0.347176 / 0.293841 (0.053335) | 0.036359 / 0.128546 (-0.092187) | 0.011889 / 0.075646 (-0.063758) | 0.332889 / 0.419271 (-0.086382) | 0.049708 / 0.043533 (0.006175) | 0.291207 / 0.255139 (0.036068) | 0.311066 / 0.283200 (0.027867) | 0.098418 / 0.141683 (-0.043265) | 1.415450 / 1.452155 (-0.036705) | 1.526928 / 1.492716 (0.034212) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212636 / 0.018006 (0.194630) | 0.432337 / 0.000490 (0.431847) | 0.006839 / 0.000200 (0.006639) | 0.000205 / 0.000054 (0.000150) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026045 / 0.037411 (-0.011366) | 0.107427 / 0.014526 (0.092901) | 0.114634 / 0.176557 (-0.061922) | 0.169943 / 0.737135 (-0.567192) | 0.123290 / 0.296338 (-0.173048) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.409432 / 0.215209 (0.194223) | 4.097910 / 2.077655 (2.020255) | 1.857177 / 1.504120 (0.353057) | 1.672355 / 1.541195 (0.131160) | 1.740130 / 1.468490 
(0.271640) | 0.706520 / 4.584777 (-3.878257) | 3.773606 / 3.745712 (0.027893) | 2.101635 / 5.269862 (-3.168226) | 1.326295 / 4.565676 (-3.239382) | 0.085672 / 0.424275 (-0.338604) | 0.012142 / 0.007607 (0.004534) | 0.501168 / 0.226044 (0.275123) | 5.049784 / 2.268929 (2.780855) | 2.322477 / 55.444624 (-53.122148) | 1.990105 / 6.876477 (-4.886372) | 2.115003 / 2.142072 (-0.027070) | 0.837518 / 4.805227 (-3.967709) | 0.168457 / 6.500664 (-6.332207) | 0.064622 / 0.075469 (-0.010847) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.188152 / 1.841788 (-0.653635) | 14.991585 / 8.074308 (6.917276) | 14.635187 / 10.191392 (4.443795) | 0.183708 / 0.680424 (-0.496716) | 0.017452 / 0.534201 (-0.516749) | 0.418963 / 0.579283 (-0.160320) | 0.428893 / 0.434364 (-0.005471) | 0.502108 / 0.540337 (-0.038229) | 0.596345 / 1.386936 (-0.790591) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007404 / 0.011353 (-0.003949) | 0.005148 / 0.011008 (-0.005860) | 0.074785 / 0.038508 (0.036277) | 0.033815 / 0.023109 (0.010706) | 0.332752 / 0.275898 (0.056854) | 0.368018 / 0.323480 (0.044538) | 0.005642 / 0.007986 (-0.002344) | 0.004041 / 0.004328 (-0.000287) | 0.073455 / 0.004250 (0.069205) | 0.047380 / 0.037052 (0.010328) | 0.337017 / 0.258489 (0.078528) | 0.384185 / 0.293841 (0.090344) | 0.036592 / 0.128546 (-0.091954) | 0.012109 / 0.075646 (-0.063537) | 0.086862 / 0.419271 (-0.332410) | 0.049030 / 0.043533 (0.005497) | 0.336542 / 0.255139 (0.081403) | 0.350295 / 0.283200 (0.067096) | 0.100998 / 0.141683 (-0.040685) | 1.469749 / 1.452155 (0.017594) | 1.588355 / 1.492716 (0.095639) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227552 / 0.018006 (0.209546) | 0.438087 / 0.000490 (0.437598) | 0.000394 / 0.000200 (0.000194) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030575 / 0.037411 (-0.006836) | 0.111914 / 0.014526 (0.097388) | 0.124583 / 0.176557 (-0.051973) | 0.175471 / 0.737135 (-0.561665) | 0.129535 / 0.296338 (-0.166803) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425625 / 0.215209 (0.210416) | 4.228328 / 2.077655 (2.150673) | 2.021087 / 1.504120 (0.516967) | 1.832550 / 1.541195 (0.291355) | 1.925572 / 1.468490 (0.457082) | 0.690772 / 4.584777 (-3.894005) | 3.724900 / 3.745712 (-0.020813) | 2.080286 / 5.269862 (-3.189576) | 1.316854 / 4.565676 (-3.248822) | 0.085123 / 0.424275 (-0.339152) | 0.012078 / 0.007607 (0.004471) | 0.525802 / 0.226044 (0.299758) | 5.242598 / 2.268929 (2.973670) | 2.491596 / 55.444624 (-52.953028) | 2.125156 / 6.876477 (-4.751320) | 2.185922 / 2.142072 (0.043850) | 0.823116 / 4.805227 (-3.982111) | 0.165188 / 6.500664 (-6.335476) | 0.063970 / 0.075469 (-0.011499) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.256948 / 1.841788 (-0.584840) | 14.981990 / 8.074308 (6.907682) | 14.565266 / 10.191392 (4.373874) | 0.175064 / 0.680424 (-0.505360) | 0.017628 / 0.534201 (-0.516573) | 0.429979 / 0.579283 (-0.149304) | 0.422509 / 0.434364 (-0.011855) | 0.546262 / 0.540337 (0.005924) | 0.647103 / 1.386936 (-0.739833) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0803a006db1c395ac715662cc6079651f77c11ea \"CML watermark\")\n"
] | "2023-04-03T10:44:58Z" | "2023-04-04T15:05:24Z" | "2023-04-04T14:58:16Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5697.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5697",
"merged_at": "2023-04-04T14:58:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5697.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5697"
} | close https://github.com/huggingface/datasets/issues/5696 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5697/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5697/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3920 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3920/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3920/comments | https://api.github.com/repos/huggingface/datasets/issues/3920/events | https://github.com/huggingface/datasets/issues/3920 | 1,169,532,807 | I_kwDODunzps5FtaeH | 3,920 | 'datasets.features' is not a package | {
"avatar_url": "https://avatars.githubusercontent.com/u/68355048?v=4",
"events_url": "https://api.github.com/users/Arij-Aladel/events{/privacy}",
"followers_url": "https://api.github.com/users/Arij-Aladel/followers",
"following_url": "https://api.github.com/users/Arij-Aladel/following{/other_user}",
"gists_url": "https://api.github.com/users/Arij-Aladel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Arij-Aladel",
"id": 68355048,
"login": "Arij-Aladel",
"node_id": "MDQ6VXNlcjY4MzU1MDQ4",
"organizations_url": "https://api.github.com/users/Arij-Aladel/orgs",
"received_events_url": "https://api.github.com/users/Arij-Aladel/received_events",
"repos_url": "https://api.github.com/users/Arij-Aladel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Arij-Aladel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Arij-Aladel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Arij-Aladel"
} | [] | closed | false | null | [] | null | [
"Hi @Arij-Aladel,\r\n\r\nYou are using a very old version of our library `datasets`: 1.8.0\r\nCurrent version is 2.0.0 (and the previous one was 1.18.4)\r\n\r\nPlease, try to update `datasets` library and check if the problem persists:\r\n```shell\r\n/env/bin/pip install -U datasets",
"The problem I can no I have build my project on this version and old version on transformers. I have preprocessed the data again to use it. Thank for your reply"
] | "2022-03-15T11:14:23Z" | "2022-03-16T09:17:12Z" | "2022-03-16T09:17:12Z" | NONE | null | null | null | @albertvillanova
python 3.9
os: ubuntu 20.04
In conda environment
torch installed by
```/env/bin/pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html```
datasets package is installed by
```
/env/bin/pip install datasets==1.8.0
```
While running the code I get this error
```
[6]<stderr>: File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 875, in find_class
[6]<stderr>: return super().find_class(mod_name, name)
[6]<stderr>:ModuleNotFoundError: No module named 'datasets.features.features'; 'datasets.features' is not a package
```
precisely this error appears when
torch.load('data_file.pt')
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 607, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 882, in _load
result = unpickler.load()
File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 875, in find_class
return super().find_class(mod_name, name)
ModuleNotFoundError: No module named 'datasets.features.features'; 'datasets.features' is not a package
```
Why am I getting this error?
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3920/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3920/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2997 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2997/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2997/comments | https://api.github.com/repos/huggingface/datasets/issues/2997/events | https://github.com/huggingface/datasets/issues/2997 | 1,013,270,069 | I_kwDODunzps48ZUY1 | 2,997 | Dataset has incorrect labels | {
"avatar_url": "https://avatars.githubusercontent.com/u/63367770?v=4",
"events_url": "https://api.github.com/users/marshmellow77/events{/privacy}",
"followers_url": "https://api.github.com/users/marshmellow77/followers",
"following_url": "https://api.github.com/users/marshmellow77/following{/other_user}",
"gists_url": "https://api.github.com/users/marshmellow77/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/marshmellow77",
"id": 63367770,
"login": "marshmellow77",
"node_id": "MDQ6VXNlcjYzMzY3Nzcw",
"organizations_url": "https://api.github.com/users/marshmellow77/orgs",
"received_events_url": "https://api.github.com/users/marshmellow77/received_events",
"repos_url": "https://api.github.com/users/marshmellow77/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/marshmellow77/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marshmellow77/subscriptions",
"type": "User",
"url": "https://api.github.com/users/marshmellow77"
} | [] | closed | false | null | [] | null | [
"Hi @marshmellow77, thanks for reporting.\r\n\r\nThat issue is fixed since `datasets` version 1.9.0 (see 16bc665f2753677c765011ef79c84e55486d4347).\r\n\r\nPlease, update `datasets` with: `pip install -U datasets`",
"Thanks. Please note that the dataset explorer (https://huggingface.co/datasets/viewer/?dataset=turkish_product_reviews) still shows the incorrect state. The sentiment for the first few customer reviews is actually negative and should be labelled with \"0\", see screenshot:\r\n\r\n![Capture1](https://user-images.githubusercontent.com/63367770/135637150-93d9b09b-f1dd-4701-97a5-5cb2672ec0c7.PNG)\r\n\r\n\r\n",
"Thanks @marshmellow77, good catch! I'm transferring this issue to https://github.com/huggingface/datasets-viewer. "
] | "2021-10-01T12:09:06Z" | "2021-10-01T15:32:00Z" | "2021-10-01T13:54:34Z" | NONE | null | null | null | The dataset https://huggingface.co/datasets/turkish_product_reviews has incorrect labels - all reviews are labelled with "1" (positive sentiment). None of the reviews is labelled with "0". See screenshot attached:
![Capture](https://user-images.githubusercontent.com/63367770/135617428-14ce0b27-5208-4e66-a3ee-71542e3257b4.PNG)
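A quick way to double-check the label distribution locally (a small sketch; the column name `sentiment` is an assumption about this dataset's schema):

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("turkish_product_reviews", split="train")
# If the labels are correct, both 0 (negative) and 1 (positive) should show up here.
print(Counter(ds["sentiment"]))
```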
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2997/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2997/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3259 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3259/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3259/comments | https://api.github.com/repos/huggingface/datasets/issues/3259/events | https://github.com/huggingface/datasets/pull/3259 | 1,052,189,775 | PR_kwDODunzps4ud5W3 | 3,259 | Updating details of IRC disentanglement data | {
"avatar_url": "https://avatars.githubusercontent.com/u/1298052?v=4",
"events_url": "https://api.github.com/users/jkkummerfeld/events{/privacy}",
"followers_url": "https://api.github.com/users/jkkummerfeld/followers",
"following_url": "https://api.github.com/users/jkkummerfeld/following{/other_user}",
"gists_url": "https://api.github.com/users/jkkummerfeld/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jkkummerfeld",
"id": 1298052,
"login": "jkkummerfeld",
"node_id": "MDQ6VXNlcjEyOTgwNTI=",
"organizations_url": "https://api.github.com/users/jkkummerfeld/orgs",
"received_events_url": "https://api.github.com/users/jkkummerfeld/received_events",
"repos_url": "https://api.github.com/users/jkkummerfeld/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jkkummerfeld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jkkummerfeld/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jkkummerfeld"
} | [] | closed | false | null | [] | null | [
"Thank you for the cleanup!"
] | "2021-11-12T17:16:58Z" | "2021-11-18T17:19:33Z" | "2021-11-18T17:19:33Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3259.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3259",
"merged_at": "2021-11-18T17:19:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3259.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3259"
} | I was pleasantly surprised to find that someone had already added my dataset to the huggingface library, but some details were missing or incorrect. This PR fixes the documentation. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3259/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3259/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1219 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1219/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1219/comments | https://api.github.com/repos/huggingface/datasets/issues/1219/events | https://github.com/huggingface/datasets/pull/1219 | 758,013,368 | MDExOlB1bGxSZXF1ZXN0NTMzMjU5NzMw | 1,219 | Add Korean NER dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4",
"events_url": "https://api.github.com/users/jaketae/events{/privacy}",
"followers_url": "https://api.github.com/users/jaketae/followers",
"following_url": "https://api.github.com/users/jaketae/following{/other_user}",
"gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jaketae",
"id": 25360440,
"login": "jaketae",
"node_id": "MDQ6VXNlcjI1MzYwNDQw",
"organizations_url": "https://api.github.com/users/jaketae/orgs",
"received_events_url": "https://api.github.com/users/jaketae/received_events",
"repos_url": "https://api.github.com/users/jaketae/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaketae/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jaketae"
} | [] | closed | false | null | [] | null | [] | "2020-12-06T20:19:06Z" | "2021-12-29T00:50:59Z" | "2020-12-08T10:25:33Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1219.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1219",
"merged_at": "2020-12-08T10:25:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1219.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1219"
} | Supersedes #1177
> This PR adds the [Korean named entity recognition dataset](https://github.com/kmounlp/NER). This dataset has been used in many downstream tasks, such as training [KoBERT](https://github.com/SKTBrain/KoBERT) for NER, as seen in this [KoBERT-CRF implementation](https://github.com/eagle705/pytorch-bert-crf-ner). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1219/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1219/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1075 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1075/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1075/comments | https://api.github.com/repos/huggingface/datasets/issues/1075/events | https://github.com/huggingface/datasets/pull/1075 | 756,501,235 | MDExOlB1bGxSZXF1ZXN0NTMyMDM4ODg1 | 1,075 | adding cleaned verion of E2E NLG | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [] | closed | false | null | [] | null | [] | "2020-12-03T19:21:07Z" | "2020-12-03T19:43:56Z" | "2020-12-03T19:43:56Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1075.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1075",
"merged_at": "2020-12-03T19:43:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1075.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1075"
} | Found at: https://github.com/tuetschek/e2e-cleaning | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1075/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1075/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2971 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2971/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2971/comments | https://api.github.com/repos/huggingface/datasets/issues/2971/events | https://github.com/huggingface/datasets/issues/2971 | 1,007,696,522 | I_kwDODunzps48EDqK | 2,971 | masakhaner dataset load problem | {
"avatar_url": "https://avatars.githubusercontent.com/u/8900094?v=4",
"events_url": "https://api.github.com/users/ontocord/events{/privacy}",
"followers_url": "https://api.github.com/users/ontocord/followers",
"following_url": "https://api.github.com/users/ontocord/following{/other_user}",
"gists_url": "https://api.github.com/users/ontocord/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ontocord",
"id": 8900094,
"login": "ontocord",
"node_id": "MDQ6VXNlcjg5MDAwOTQ=",
"organizations_url": "https://api.github.com/users/ontocord/orgs",
"received_events_url": "https://api.github.com/users/ontocord/received_events",
"repos_url": "https://api.github.com/users/ontocord/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ontocord/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ontocord/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ontocord"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Thanks for reporting, @ontocord. We are fixing the wrong metadata."
] | "2021-09-27T04:59:07Z" | "2021-09-27T12:59:59Z" | "2021-09-27T12:59:59Z" | CONTRIBUTOR | null | null | null | ## Describe the bug
Masakhaner dataset is not loading
## Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset("masakhaner",'amh')
```
## Expected results
Expected the return of a dataset
## Actual results
```
NonMatchingSplitsSizesError Traceback (most recent call last)
<ipython-input-3-a6abc1161d4c> in <module>()
1 from datasets import load_dataset
2
----> 3 dataset = load_dataset("masakhaner",'amh')
3 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py
in verify_splits(expected_splits, recorded_splits)
72 ]
73 if len(bad_splits) > 0:
---> 74 raise NonMatchingSplitsSizesError(str(bad_splits))
75 logger.info("All the splits matched successfully.")
76
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=639927, num_examples=1751, dataset_name='masakhaner'), 'recorded': SplitInfo(name='train', num_bytes=639911, num_examples=1750, dataset_name='masakhaner')}, {'expected': SplitInfo(name='validation', num_bytes=92768, num_examples=251, dataset_name='masakhaner'), 'recorded': SplitInfo(name='validation', num_bytes=92753, num_examples=250, dataset_name='masakhaner')}, {'expected': SplitInfo(name='test', num_bytes=184286, num_examples=501, dataset_name='masakhaner'), 'recorded': SplitInfo(name='test', num_bytes=184271, num_examples=500, dataset_name='masakhaner')}]
```
## Environment info
Google Colab
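Until the metadata is fixed upstream, a possible stopgap (a sketch only; the exact flag depends on the installed `datasets` version) is to skip the split-size verification:

```python
from datasets import load_dataset

# This bypasses the split-size/checksum checks instead of fixing the stale metadata,
# so it should only be used as a temporary workaround.
dataset = load_dataset("masakhaner", "amh", ignore_verifications=True)
```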
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2971/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2971/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4920 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4920/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4920/comments | https://api.github.com/repos/huggingface/datasets/issues/4920/events | https://github.com/huggingface/datasets/issues/4920 | 1,357,564,589 | I_kwDODunzps5Q6sqt | 4,920 | Unable to load local tsv files through load_dataset method | {
"avatar_url": "https://avatars.githubusercontent.com/u/44038517?v=4",
"events_url": "https://api.github.com/users/DataNoob0723/events{/privacy}",
"followers_url": "https://api.github.com/users/DataNoob0723/followers",
"following_url": "https://api.github.com/users/DataNoob0723/following{/other_user}",
"gists_url": "https://api.github.com/users/DataNoob0723/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DataNoob0723",
"id": 44038517,
"login": "DataNoob0723",
"node_id": "MDQ6VXNlcjQ0MDM4NTE3",
"organizations_url": "https://api.github.com/users/DataNoob0723/orgs",
"received_events_url": "https://api.github.com/users/DataNoob0723/received_events",
"repos_url": "https://api.github.com/users/DataNoob0723/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DataNoob0723/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DataNoob0723/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DataNoob0723"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
}
] | null | [
"Hi @DataNoob0723,\r\n\r\nUnder the hood, we use `pandas` to load CSV/TSV files. Therefore, you should use \"csv\" and pass `sep=\"\\t\"`, as explained in our docs: https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/loading_methods#from-files\r\n```python\r\nds = load_dataset('csv', sep=\"\\t\", data_files=data_files)\r\n``` "
] | "2022-08-31T16:13:39Z" | "2022-09-01T05:31:30Z" | "2022-09-01T05:31:30Z" | NONE | null | null | null | ## Describe the bug
Unable to load local tsv files through load_dataset method.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
data_files = {
'train': 'train.tsv',
'test': 'test.tsv'
}
raw_datasets = load_dataset('tsv', data_files=data_files)
```
## Expected results
I am pretty sure the data files exist in the current directory. The above code should load them as a `DatasetDict`, but it threw the exception below instead.
## Actual results
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
[<ipython-input-9-24207899c1af>](https://localhost:8080/#) in <module>
----> 1 raw_datasets = load_dataset('tsv', data_files='train.tsv')
2 frames
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1244 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1245 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
-> 1246 ) from None
1247 raise e1 from None
1248 else:
FileNotFoundError: Couldn't find a dataset script at /content/tsv/tsv.py or any data file in the same directory. Couldn't find 'tsv' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/main/datasets/tsv/tsv.py
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
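For reference, a working variant of the snippet above, following the suggestion in the comments (there is no `tsv` builder; TSV files go through the `csv` builder with a tab separator):

```python
from datasets import load_dataset

data_files = {
    "train": "train.tsv",
    "test": "test.tsv",
}
# `sep` is forwarded to pandas.read_csv under the hood.
raw_datasets = load_dataset("csv", data_files=data_files, sep="\t")
```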
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4920/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4920/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1965 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1965/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1965/comments | https://api.github.com/repos/huggingface/datasets/issues/1965/events | https://github.com/huggingface/datasets/issues/1965 | 818,833,460 | MDU6SXNzdWU4MTg4MzM0NjA= | 1,965 | Can we parallelized the add_faiss_index process over dataset shards ? | {
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shamanez",
"id": 16892570,
"login": "shamanez",
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"repos_url": "https://api.github.com/users/shamanez/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shamanez"
} | [] | closed | false | null | [] | null | [
"Hi !\r\nAs far as I know not all faiss indexes can be computed in parallel and then merged. \r\nFor example [here](https://github.com/facebookresearch/faiss/wiki/Special-operations-on-indexes#splitting-and-merging-indexes) is is mentioned that only IndexIVF indexes can be merged.\r\nMoreover faiss already works using multithreading to parallelize the workload over your different CPU cores. You can find more info [here](https://github.com/facebookresearch/faiss/wiki/Threads-and-asynchronous-calls#internal-threading)\r\nSo I feel like the gains we would get by implementing a parallel `add_faiss_index` would not be that important, but let me know what you think.\r\n",
"Actually, you are right. I also had the same idea. I am trying this in the context of end-ton-end retrieval training in RAG. So far I have parallelized the embedding re-computation within the training loop by using datasets shards. \r\n\r\nThen I was thinking of can I calculate the indexes for each shard and combined them with **concatenate** before I save.",
"@lhoestq As you mentioned faiss is already using multiprocessing. I tried to do the add_index with faiss for a dataset object inside a RAY actor and the process became very slow... if fact it takes so much time. It is because a ray actor comes with a single CPU core unless we assign it more. I also tried assigning more cores but still running add_index in the main process is very fast. "
] | "2021-03-01T12:47:34Z" | "2021-03-04T19:40:56Z" | "2021-03-04T19:40:42Z" | NONE | null | null | null | I am thinking of making the **add_faiss_index** process faster. What if we run the add_faiss_index process on separate dataset shards and then combine them before (dataset.concatenate) saving the faiss.index file ?
I feel theoretically this will reduce the accuracy of retrieval since it affects the indexing process.
@lhoestq
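A rough sketch of the sharding idea (the toy embeddings and shard count are assumptions; `faiss` must be installed, and, as noted in the comments above, only some index types such as IndexIVF can be merged back into a single index afterwards):

```python
import numpy as np
from datasets import Dataset

# Toy dataset standing in for real passage embeddings.
ds = Dataset.from_dict({"embeddings": np.random.rand(1000, 8).tolist()})

num_shards = 4
shards = [ds.shard(num_shards=num_shards, index=i, contiguous=True) for i in range(num_shards)]

# Each shard could be indexed in its own process or worker; merging the resulting
# FAISS indexes afterwards is only supported for certain index types.
for shard in shards:
    shard.add_faiss_index(column="embeddings")
```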
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1965/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1965/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3606 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3606/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3606/comments | https://api.github.com/repos/huggingface/datasets/issues/3606/events | https://github.com/huggingface/datasets/issues/3606 | 1,108,918,701 | I_kwDODunzps5CGMGt | 3,606 | audio column not saved correctly after resampling | {
"avatar_url": "https://avatars.githubusercontent.com/u/24724502?v=4",
"events_url": "https://api.github.com/users/laphang/events{/privacy}",
"followers_url": "https://api.github.com/users/laphang/followers",
"following_url": "https://api.github.com/users/laphang/following{/other_user}",
"gists_url": "https://api.github.com/users/laphang/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/laphang",
"id": 24724502,
"login": "laphang",
"node_id": "MDQ6VXNlcjI0NzI0NTAy",
"organizations_url": "https://api.github.com/users/laphang/orgs",
"received_events_url": "https://api.github.com/users/laphang/received_events",
"repos_url": "https://api.github.com/users/laphang/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/laphang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laphang/subscriptions",
"type": "User",
"url": "https://api.github.com/users/laphang"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi ! We just released a new version of `datasets` that should fix this.\r\n\r\nI tested resampling and using save/load_from_disk afterwards and it seems to be fixed now",
"Hi @lhoestq, \r\n\r\nJust tested the latest datasets version, and confirming that this is fixed for me. \r\n\r\nThanks!",
"Also, just an FYI, data that I had saved (with save_to_disk) previously from common voice using datasets==1.17.0 now give the error below when loading (with load_from disk) using datasets==1.18.0. \r\n\r\nHowever, when starting fresh using load_dataset, then doing the resampling, the save/load_from disk worked fine. \r\n\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<timed exec> in <module>\r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_from_disk(dataset_path, fs, keep_in_memory)\r\n 1747 return Dataset.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)\r\n 1748 elif fs.isfile(Path(dest_dataset_path, config.DATASETDICT_JSON_FILENAME).as_posix()):\r\n-> 1749 return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)\r\n 1750 else:\r\n 1751 raise FileNotFoundError(\r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py in load_from_disk(dataset_dict_path, fs, keep_in_memory)\r\n 769 else Path(dest_dataset_dict_path, k).as_posix()\r\n 770 )\r\n--> 771 dataset_dict[k] = Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory)\r\n 772 return dataset_dict\r\n 773 \r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in load_from_disk(dataset_path, fs, keep_in_memory)\r\n 1118 info=dataset_info,\r\n 1119 split=split,\r\n-> 1120 fingerprint=state[\"_fingerprint\"],\r\n 1121 )\r\n 1122 \r\n\r\n/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in __init__(self, arrow_table, info, split, indices_table, fingerprint)\r\n 655 if self.info.features.type != inferred_features.type:\r\n 656 raise ValueError(\r\n--> 657 f\"External features info don't match the dataset:\\nGot\\n{self.info.features}\\nwith type\\n{self.info.features.type}\\n\\nbut expected something like\\n{inferred_features}\\nwith type\\n{inferred_features.type}\"\r\n 658 )\r\n 659 \r\n\r\nValueError: External features info don't match the dataset:\r\nGot\r\n{'accent': Value(dtype='string', id=None), 'age': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=48000, mono=True, id=None), 'client_id': Value(dtype='string', id=None), 'down_votes': Value(dtype='int64', id=None), 'gender': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None)}\r\nwith type\r\nstruct<accent: string, age: string, audio: struct<bytes: binary, path: string>, client_id: string, down_votes: int64, gender: string, locale: string, path: string, segment: string, sentence: string, up_votes: int64>\r\n\r\nbut expected something like\r\n{'accent': Value(dtype='string', id=None), 'age': Value(dtype='string', id=None), 'audio': {'path': Value(dtype='string', id=None), 'bytes': Value(dtype='binary', id=None)}, 'client_id': Value(dtype='string', id=None), 'down_votes': Value(dtype='int64', id=None), 'gender': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None)}\r\nwith type\r\nstruct<accent: string, age: string, audio: struct<path: string, bytes: binary>, client_id: string, down_votes: int64, gender: string, locale: string, path: string, segment: string, sentence: string, 
up_votes: int64> \r\n```"
] | "2022-01-20T06:37:10Z" | "2022-01-23T01:41:01Z" | "2022-01-23T01:24:14Z" | NONE | null | null | null | ## Describe the bug
After resampling the audio column, saving with `save_to_disk` doesn't seem to preserve the correct feature type.
## Steps to reproduce the bug
- load a subset of common voice dataset (48Khz)
- resample audio column to 16Khz
- save with save_to_disk()
- load with load_from_disk()
## Expected results
I expected that after saving the data and then loading it back in, the audio column would still have the correct `datasets.Audio` feature type (i.e. the same as before saving it):
{'accent': Value(dtype='string', id=None),
'age': Value(dtype='string', id=None),
'audio': Audio(sampling_rate=16000, mono=True, _storage_dtype='string', id=None),
'client_id': Value(dtype='string', id=None),
'down_votes': Value(dtype='int64', id=None),
'gender': Value(dtype='string', id=None),
'locale': Value(dtype='string', id=None),
'path': Value(dtype='string', id=None),
'segment': Value(dtype='string', id=None),
'sentence': Value(dtype='string', id=None),
'up_votes': Value(dtype='int64', id=None)}
## Actual results
Audio column does not have the right type
{'accent': Value(dtype='string', id=None),
'age': Value(dtype='string', id=None),
'audio': {'bytes': Value(dtype='binary', id=None),
'path': Value(dtype='string', id=None)},
'client_id': Value(dtype='string', id=None),
'down_votes': Value(dtype='int64', id=None),
'gender': Value(dtype='string', id=None),
'locale': Value(dtype='string', id=None),
'path': Value(dtype='string', id=None),
'segment': Value(dtype='string', id=None),
'sentence': Value(dtype='string', id=None),
'up_votes': Value(dtype='int64', id=None)}
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.17.0
- Platform: linux
- Python version:
- PyArrow version:
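For reference, a minimal round-trip sketch of the steps above (the language config `"tr"` and the small split slice are assumptions, and the `common_voice` loading script may require an older `datasets` release):

```python
from datasets import Audio, load_dataset, load_from_disk

cv = load_dataset("common_voice", "tr", split="validation[:10]")
cv = cv.cast_column("audio", Audio(sampling_rate=16_000))  # resample 48 kHz -> 16 kHz

cv.save_to_disk("cv_resampled")
reloaded = load_from_disk("cv_resampled")

# Expected: Audio(sampling_rate=16000, ...); the report above shows a plain struct instead.
print(reloaded.features["audio"])
```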
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3606/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3606/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4833 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4833/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4833/comments | https://api.github.com/repos/huggingface/datasets/issues/4833/events | https://github.com/huggingface/datasets/pull/4833 | 1,336,946,965 | PR_kwDODunzps49E_Nk | 4,833 | Fix missing tags in dataset cards | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-08-12T09:04:52Z" | "2022-09-22T14:41:23Z" | "2022-08-12T09:45:55Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4833.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4833",
"merged_at": "2022-08-12T09:45:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4833.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4833"
} | Fix missing tags in dataset cards:
- boolq
- break_data
- definite_pronoun_resolution
- emo
- kor_nli
- pg19
- quartz
- sciq
- squad_es
- wmt14
- wmt15
- wmt16
- wmt17
- wmt18
- wmt19
- wmt_t2t
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4833/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4833/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6153 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6153/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6153/comments | https://api.github.com/repos/huggingface/datasets/issues/6153/events | https://github.com/huggingface/datasets/issues/6153 | 1,852,630,074 | I_kwDODunzps5ubOQ6 | 6,153 | custom load dataset to hub | {
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/andysingal",
"id": 20493493,
"login": "andysingal",
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"organizations_url": "https://api.github.com/users/andysingal/orgs",
"received_events_url": "https://api.github.com/users/andysingal/received_events",
"repos_url": "https://api.github.com/users/andysingal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andysingal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/andysingal"
} | [] | closed | false | null | [] | null | [
"This is an issue for the [Datasets repo](https://github.com/huggingface/datasets).",
"> This is an issue for the [Datasets repo](https://github.com/huggingface/datasets).\r\n\r\nThanks @sgugger , I guess I will wait for them to address the issue . Looking forward to hearing from them ",
"You can use `.push_to_hub(\"<username>/<repo>\")` to push a `Dataset` to the Hub.",
"> You can use `.push_to_hub(\"<username>/<repo>\")` to push a `Dataset` to the Hub.\r\n\r\nhow about subset? like `.load_dataset(\"<username>/<repo>\", \"<subset>\")`, how can I upload multi-dataset in one repo? thanks a lot ! ",
"> > You can use `.push_to_hub(\"<username>/<repo>\")` to push a `Dataset` to the Hub.\r\n> \r\n> how about subset? like `.load_dataset(\"<username>/<repo>\", \"<subset>\")`, how can I upload multi-dataset in one repo? thanks a lot !\r\n\r\nI solved it by upgrading `Datasets` version to 2.15.0"
] | "2023-08-13T04:42:22Z" | "2023-11-21T11:50:28Z" | "2023-10-08T17:04:16Z" | NONE | null | null | null | ### System Info
Kaggle notebook
I transformed the dataset:
```
dataset = load_dataset("Dahoas/first-instruct-human-assistant-prompt")
```
to
formatted_dataset:
```
Dataset({
features: ['message_tree_id', 'message_tree_text'],
num_rows: 33143
})
```
but I would like to know how to upload it to the Hub.
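A minimal sketch of doing that, in line with the suggestions in the comments above (the repo id is a placeholder, and `config_name` requires a recent `datasets` release):

```python
from datasets import Dataset

formatted_dataset = Dataset.from_dict(
    {"message_tree_id": ["id-0"], "message_tree_text": ["example text"]}
)

# Requires `huggingface-cli login` or an HF token beforehand.
# `config_name` lets several subsets live in one repo, loadable later as
# load_dataset("<username>/<repo>", "<subset>").
formatted_dataset.push_to_hub("your-username/your-repo", config_name="default")
```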
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
shared above
### Expected behavior
load dataset to hub | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6153/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6153/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4421 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4421/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4421/comments | https://api.github.com/repos/huggingface/datasets/issues/4421/events | https://github.com/huggingface/datasets/pull/4421 | 1,253,059,467 | PR_kwDODunzps44szxR | 4,421 | Add extractor for bzip2-compressed files | {
"avatar_url": "https://avatars.githubusercontent.com/u/2910707?v=4",
"events_url": "https://api.github.com/users/osyvokon/events{/privacy}",
"followers_url": "https://api.github.com/users/osyvokon/followers",
"following_url": "https://api.github.com/users/osyvokon/following{/other_user}",
"gists_url": "https://api.github.com/users/osyvokon/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/osyvokon",
"id": 2910707,
"login": "osyvokon",
"node_id": "MDQ6VXNlcjI5MTA3MDc=",
"organizations_url": "https://api.github.com/users/osyvokon/orgs",
"received_events_url": "https://api.github.com/users/osyvokon/received_events",
"repos_url": "https://api.github.com/users/osyvokon/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/osyvokon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osyvokon/subscriptions",
"type": "User",
"url": "https://api.github.com/users/osyvokon"
} | [] | closed | false | null | [] | null | [] | "2022-05-30T19:19:40Z" | "2022-06-06T15:22:50Z" | "2022-06-06T15:22:50Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4421.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4421",
"merged_at": "2022-06-06T15:22:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4421.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4421"
} | This change enables loading bzipped datasets, just like any other compressed dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4421/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4421/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/503 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/503/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/503/comments | https://api.github.com/repos/huggingface/datasets/issues/503/events | https://github.com/huggingface/datasets/pull/503 | 678,726,538 | MDExOlB1bGxSZXF1ZXN0NDY3NjI3MTEw | 503 | CompGuessWhat?! 0.2.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/1479733?v=4",
"events_url": "https://api.github.com/users/aleSuglia/events{/privacy}",
"followers_url": "https://api.github.com/users/aleSuglia/followers",
"following_url": "https://api.github.com/users/aleSuglia/following{/other_user}",
"gists_url": "https://api.github.com/users/aleSuglia/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aleSuglia",
"id": 1479733,
"login": "aleSuglia",
"node_id": "MDQ6VXNlcjE0Nzk3MzM=",
"organizations_url": "https://api.github.com/users/aleSuglia/orgs",
"received_events_url": "https://api.github.com/users/aleSuglia/received_events",
"repos_url": "https://api.github.com/users/aleSuglia/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aleSuglia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aleSuglia/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aleSuglia"
} | [] | closed | false | null | [] | null | [
"I don't see any significant change in the dataset script (except the version value update), can you check that again please ?",
"Hi @aleSuglia , can you check that all the changes you wanted to do are in the dataset script ?",
"Hey sorry but I'm in the middle of a conference deadline. I'll let you know asap!",
"Ok np :)\r\nGood luck with your work for the conference",
"I finally managed to find some time to complete this. The only weird thing about this release is that I had to run the tests with the ignore checksum flag. Could it be because the Dropbox link doesn't change but the file does? Sorry didn't have the time to check the code to see what's happening behind the scenes.\r\n",
"Yes if the file changed, then the checksum verification won't pass as it expects to see the checksum of the old file.\r\nThe checksum is computed by hashing the complete file.\r\nYou can update the checksum by doing \r\n\r\n```\r\nnlp-cli test ./datasets/compguesswhat --save_infos --all_configs\r\n```",
"Any updates on this?",
"Hi :)\r\n\r\nI think what's left to do is\r\n1- rebase from master, since we changed the name of the library\r\n2- update the metadata file of the dataset using the command \r\n```\r\ndatasets-cli test ./datasets/compguesswhat --save_infos --all_configs --ignore_verifications\r\n```\r\n\r\nThis command should update the checksum of the dropbox file",
"That's perfect. I'll have a look at it later today!",
"Nice thanks !",
"@lhoestq not sure why the quality check doesn't pass. Unfortunately CircleCI doesn't show the actual error. If I run `black` on my machine it works just fine. Ideas?",
"@lhoestq any updates? :) ",
"Your version of `black` might be outdated, or you run using `black` instead of `make style` since it reformatted 100+ files.\r\nCould you try to update black, then `make style` ?",
"Yes I think my versions of isort and black were outdated. Thanks @lhoestq :)\r\n",
"It still doesn't look right in terms of line-length.\r\nAre you running `black` or `make style` ?",
"I'm running `make style`. This is the output of the command:\r\n\r\n```\r\nblack --line-length 119 --target-version py36 tests src benchmarks datasets metrics\r\nAll done! ✨ 🍰 ✨\r\n250 files left unchanged.\r\nisort tests src benchmarks datasets metrics\r\n```",
"Weird I have the same output without file changes with black `20.8b1` and isort `5.6.4` using `make style` too",
"I think that's because black doesn't revert the changes you first did with the old version.\r\nCould you open a new PR with only the ComGuessWhat files updated ? Hopefully now that black is up to date it should work directly (and to avoid 100+ files changes)",
"I will have a look at it tomorrow. Thanks for your help!",
"I'm closing this one and I'll open a new one."
] | "2020-08-13T20:51:26Z" | "2020-10-21T06:54:29Z" | "2020-10-21T06:54:29Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/503.diff",
"html_url": "https://github.com/huggingface/datasets/pull/503",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/503.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/503"
} | We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/503/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/503/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5446 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5446/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5446/comments | https://api.github.com/repos/huggingface/datasets/issues/5446/events | https://github.com/huggingface/datasets/pull/5446 | 1,550,591,588 | PR_kwDODunzps5IMyka | 5,446 | test v0.12.0.rc0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Wauplin",
"id": 11801849,
"login": "Wauplin",
"node_id": "MDQ6VXNlcjExODAxODQ5",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Wauplin"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@Wauplin I was testing it in a dedicated branch without opening a PR: https://github.com/huggingface/datasets/commits/test-hfh-0.12.0rc0",
"Oops, sorry @albertvillanova. I thought for next time I'll start the CIs before pinging everyone.\r\nI'm closing this one.",
"@Wauplin in your Slack message, you asked people from every major dependent library to check that our CI work. That is why I am checking it... :)\r\n\r\nAlso, I think for this purpose it is better to test it in a dedicated branch, rather than opening and closing a PR.",
"Yes, yes I know. Completely my fault on this one"
] | "2023-01-20T10:05:19Z" | "2023-01-20T10:43:22Z" | "2023-01-20T10:13:48Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5446.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5446",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5446.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5446"
} | DO NOT MERGE.
Only to test the CI.
cc @lhoestq @albertvillanova | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5446/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5446/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4159 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4159/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4159/comments | https://api.github.com/repos/huggingface/datasets/issues/4159/events | https://github.com/huggingface/datasets/pull/4159 | 1,202,522,153 | PR_kwDODunzps42Izmd | 4,159 | Add `TruthfulQA` dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/41410219?v=4",
"events_url": "https://api.github.com/users/jon-tow/events{/privacy}",
"followers_url": "https://api.github.com/users/jon-tow/followers",
"following_url": "https://api.github.com/users/jon-tow/following{/other_user}",
"gists_url": "https://api.github.com/users/jon-tow/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jon-tow",
"id": 41410219,
"login": "jon-tow",
"node_id": "MDQ6VXNlcjQxNDEwMjE5",
"organizations_url": "https://api.github.com/users/jon-tow/orgs",
"received_events_url": "https://api.github.com/users/jon-tow/received_events",
"repos_url": "https://api.github.com/users/jon-tow/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jon-tow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jon-tow/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jon-tow"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Bump. (I'm not sure which reviewer to `@` but, previously, @lhoestq has been very helpful 🤗 )"
] | "2022-04-12T23:19:04Z" | "2022-06-08T15:51:33Z" | "2022-06-08T14:43:34Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4159.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4159",
"merged_at": "2022-06-08T14:43:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4159.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4159"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4159/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4159/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6161 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6161/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6161/comments | https://api.github.com/repos/huggingface/datasets/issues/6161/events | https://github.com/huggingface/datasets/pull/6161 | 1,855,794,354 | PR_kwDODunzps5YM0g7 | 6,161 | Fix protocol prefix for Beam | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | open | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006736 / 0.011353 (-0.004617) | 0.004099 / 0.011008 (-0.006909) | 0.084339 / 0.038508 (0.045831) | 0.073715 / 0.023109 (0.050605) | 0.311962 / 0.275898 (0.036064) | 0.356108 / 0.323480 (0.032628) | 0.005321 / 0.007986 (-0.002665) | 0.003390 / 0.004328 (-0.000939) | 0.064622 / 0.004250 (0.060372) | 0.053978 / 0.037052 (0.016926) | 0.328967 / 0.258489 (0.070478) | 0.370506 / 0.293841 (0.076665) | 0.031123 / 0.128546 (-0.097423) | 0.008465 / 0.075646 (-0.067181) | 0.288136 / 0.419271 (-0.131136) | 0.052909 / 0.043533 (0.009376) | 0.325189 / 0.255139 (0.070050) | 0.360112 / 0.283200 (0.076912) | 0.023389 / 0.141683 (-0.118294) | 1.492899 / 1.452155 (0.040744) | 1.586449 / 1.492716 (0.093733) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219708 / 0.018006 (0.201702) | 0.469550 / 0.000490 (0.469060) | 0.002776 / 0.000200 (0.002576) | 0.000084 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028985 / 0.037411 (-0.008427) | 0.083487 / 0.014526 (0.068961) | 0.096938 / 0.176557 (-0.079619) | 0.152886 / 0.737135 (-0.584249) | 0.096242 / 0.296338 (-0.200096) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.381959 / 0.215209 (0.166750) | 3.800033 / 2.077655 (1.722378) | 1.831903 / 1.504120 (0.327783) | 1.663207 / 1.541195 (0.122012) | 1.747282 / 1.468490 
(0.278792) | 0.481671 / 4.584777 (-4.103106) | 3.653725 / 3.745712 (-0.091987) | 3.253058 / 5.269862 (-2.016804) | 2.022014 / 4.565676 (-2.543663) | 0.056651 / 0.424275 (-0.367624) | 0.007640 / 0.007607 (0.000033) | 0.461795 / 0.226044 (0.235750) | 4.625535 / 2.268929 (2.356606) | 2.356341 / 55.444624 (-53.088283) | 1.977437 / 6.876477 (-4.899040) | 2.179672 / 2.142072 (0.037599) | 0.582875 / 4.805227 (-4.222353) | 0.132964 / 6.500664 (-6.367700) | 0.060398 / 0.075469 (-0.015071) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.309567 / 1.841788 (-0.532220) | 19.856306 / 8.074308 (11.781997) | 14.074350 / 10.191392 (3.882958) | 0.149615 / 0.680424 (-0.530809) | 0.018487 / 0.534201 (-0.515714) | 0.393995 / 0.579283 (-0.185288) | 0.409057 / 0.434364 (-0.025307) | 0.459551 / 0.540337 (-0.080787) | 0.644594 / 1.386936 (-0.742342) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006824 / 0.011353 (-0.004529) | 0.004099 / 0.011008 (-0.006909) | 0.064415 / 0.038508 (0.025907) | 0.077983 / 0.023109 (0.054874) | 0.359351 / 0.275898 (0.083453) | 0.395168 / 0.323480 (0.071688) | 0.005384 / 0.007986 (-0.002602) | 0.003298 / 0.004328 (-0.001030) | 0.065041 / 0.004250 (0.060791) | 0.056717 / 0.037052 (0.019664) | 0.366882 / 0.258489 (0.108393) | 0.401337 / 0.293841 (0.107496) | 0.032273 / 0.128546 (-0.096273) | 0.008666 / 0.075646 (-0.066981) | 0.071442 / 0.419271 (-0.347829) | 0.049999 / 0.043533 (0.006466) | 0.365001 / 0.255139 (0.109862) | 0.379579 / 0.283200 (0.096379) | 0.023357 / 0.141683 (-0.118326) | 1.476839 / 1.452155 (0.024684) | 1.541703 / 1.492716 (0.048987) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239014 / 0.018006 (0.221008) | 0.460678 / 0.000490 (0.460188) | 0.003368 / 0.000200 (0.003168) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030981 / 0.037411 (-0.006430) | 0.088287 / 0.014526 (0.073761) | 0.102459 / 0.176557 (-0.074098) | 0.154695 / 0.737135 (-0.582441) | 0.103479 / 0.296338 (-0.192860) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416084 / 0.215209 (0.200874) | 4.128365 / 2.077655 (2.050710) | 2.113053 / 1.504120 (0.608934) | 1.948993 / 1.541195 (0.407798) | 2.035609 / 1.468490 (0.567119) | 0.481705 / 4.584777 (-4.103072) | 3.630366 / 3.745712 (-0.115346) | 3.340837 / 5.269862 (-1.929024) | 2.052573 / 4.565676 (-2.513104) | 0.056805 / 0.424275 (-0.367470) | 0.007294 / 0.007607 (-0.000313) | 0.489597 / 0.226044 (0.263553) | 4.892728 / 2.268929 (2.623799) | 2.564692 / 55.444624 (-52.879932) | 2.251964 / 6.876477 (-4.624513) | 2.457912 / 2.142072 (0.315839) | 0.588433 / 4.805227 (-4.216794) | 0.133588 / 6.500664 (-6.367076) | 0.062298 / 0.075469 (-0.013171) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.328566 / 1.841788 (-0.513222) | 20.145568 / 8.074308 (12.071260) | 14.231306 / 10.191392 (4.039914) | 0.168356 / 0.680424 (-0.512067) | 0.018333 / 0.534201 (-0.515868) | 0.390901 / 0.579283 (-0.188382) | 0.415005 / 0.434364 (-0.019359) | 0.477282 / 0.540337 (-0.063055) | 0.652085 / 1.386936 (-0.734851) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#341a41880a70b29f030caa0d36f1e297535ba5f9 \"CML watermark\")\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6161). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006388 / 0.011353 (-0.004965) | 0.003917 / 0.011008 (-0.007092) | 0.087397 / 0.038508 (0.048889) | 0.068522 / 0.023109 (0.045412) | 0.313299 / 0.275898 (0.037401) | 0.342884 / 0.323480 (0.019405) | 0.005216 / 0.007986 (-0.002770) | 0.003293 / 0.004328 (-0.001035) | 0.067474 / 0.004250 (0.063224) | 0.051122 / 0.037052 (0.014070) | 0.326443 / 0.258489 (0.067954) | 0.355744 / 0.293841 (0.061903) | 0.031130 / 0.128546 (-0.097416) | 0.008617 / 0.075646 (-0.067029) | 0.291201 / 0.419271 (-0.128070) | 0.052050 / 0.043533 (0.008517) | 0.312135 / 0.255139 (0.056996) | 0.347233 / 0.283200 (0.064034) | 0.023775 / 0.141683 (-0.117907) | 1.478807 / 1.452155 (0.026652) | 1.581239 / 1.492716 (0.088522) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208252 / 0.018006 (0.190246) | 0.466314 / 0.000490 (0.465824) | 0.004439 / 0.000200 (0.004239) | 0.000104 / 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027918 / 0.037411 (-0.009494) | 0.082410 / 0.014526 (0.067884) | 0.094231 / 0.176557 (-0.082326) | 0.150189 / 0.737135 (-0.586946) | 0.095404 / 0.296338 (-0.200935) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.382026 / 0.215209 (0.166817) | 3.822213 / 2.077655 (1.744559) | 1.833716 / 1.504120 (0.329596) | 1.666250 / 1.541195 (0.125055) | 1.703350 / 1.468490 
(0.234860) | 0.477918 / 4.584777 (-4.106859) | 3.629304 / 3.745712 (-0.116408) | 3.199672 / 5.269862 (-2.070190) | 1.977855 / 4.565676 (-2.587821) | 0.056275 / 0.424275 (-0.368000) | 0.007538 / 0.007607 (-0.000070) | 0.455995 / 0.226044 (0.229950) | 4.559234 / 2.268929 (2.290305) | 2.333819 / 55.444624 (-53.110805) | 2.006851 / 6.876477 (-4.869625) | 2.150683 / 2.142072 (0.008611) | 0.576786 / 4.805227 (-4.228441) | 0.132352 / 6.500664 (-6.368312) | 0.059359 / 0.075469 (-0.016110) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.261525 / 1.841788 (-0.580262) | 19.174957 / 8.074308 (11.100649) | 14.286796 / 10.191392 (4.095404) | 0.144610 / 0.680424 (-0.535813) | 0.018213 / 0.534201 (-0.515988) | 0.390404 / 0.579283 (-0.188879) | 0.404678 / 0.434364 (-0.029686) | 0.455636 / 0.540337 (-0.084701) | 0.620801 / 1.386936 (-0.766135) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006383 / 0.011353 (-0.004970) | 0.003852 / 0.011008 (-0.007156) | 0.064116 / 0.038508 (0.025607) | 0.068920 / 0.023109 (0.045810) | 0.359439 / 0.275898 (0.083541) | 0.388904 / 0.323480 (0.065425) | 0.005192 / 0.007986 (-0.002794) | 0.003233 / 0.004328 (-0.001095) | 0.064589 / 0.004250 (0.060339) | 0.054496 / 0.037052 (0.017444) | 0.368699 / 0.258489 (0.110210) | 0.400420 / 0.293841 (0.106579) | 0.030869 / 0.128546 (-0.097677) | 0.008424 / 0.075646 (-0.067222) | 0.071015 / 0.419271 (-0.348257) | 0.048333 / 0.043533 (0.004801) | 0.360652 / 0.255139 (0.105513) | 0.393534 / 0.283200 (0.110334) | 0.022685 / 0.141683 (-0.118998) | 1.495565 / 1.452155 (0.043410) | 1.537947 / 1.492716 (0.045230) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232911 / 0.018006 (0.214905) | 0.454191 / 0.000490 (0.453702) | 0.005711 / 0.000200 (0.005511) | 0.000117 / 0.000054 (0.000062) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029486 / 0.037411 (-0.007925) | 0.087249 / 0.014526 (0.072724) | 0.100104 / 0.176557 (-0.076453) | 0.151556 / 0.737135 (-0.585580) | 0.100853 / 0.296338 (-0.195485) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415134 / 0.215209 (0.199925) | 4.139068 / 2.077655 (2.061413) | 2.121079 / 1.504120 (0.616959) | 1.945616 / 1.541195 (0.404421) | 1.988188 / 1.468490 (0.519698) | 0.483994 / 4.584777 (-4.100783) | 3.640366 / 3.745712 (-0.105347) | 3.218896 / 5.269862 (-2.050966) | 2.015527 / 4.565676 (-2.550149) | 0.056946 / 0.424275 (-0.367329) | 0.007262 / 0.007607 (-0.000345) | 0.486075 / 0.226044 (0.260031) | 4.864191 / 2.268929 (2.595262) | 2.590853 / 55.444624 (-52.853772) | 2.315359 / 6.876477 (-4.561118) | 2.418733 / 2.142072 (0.276661) | 0.582378 / 4.805227 (-4.222849) | 0.134097 / 6.500664 (-6.366568) | 0.060797 / 0.075469 (-0.014672) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.337021 / 1.841788 (-0.504766) | 19.468907 / 8.074308 (11.394599) | 14.348874 / 10.191392 (4.157482) | 0.170408 / 0.680424 (-0.510016) | 0.018414 / 0.534201 (-0.515787) | 0.394551 / 0.579283 (-0.184732) | 0.404750 / 0.434364 (-0.029613) | 0.471972 / 0.540337 (-0.068365) | 0.650607 / 1.386936 (-0.736329) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ab4d978e2d5c246dc91e2fed041b06a38190be3b \"CML watermark\")\n",
"The CI errors are unrelated to the changes"
] | "2023-08-17T22:40:37Z" | "2023-08-18T13:47:59Z" | null | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6161.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6161",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6161.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6161"
} | Fix #6147 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6161/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6161/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1191 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1191/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1191/comments | https://api.github.com/repos/huggingface/datasets/issues/1191/events | https://github.com/huggingface/datasets/pull/1191 | 757,836,654 | MDExOlB1bGxSZXF1ZXN0NTMzMTMyNTg1 | 1,191 | Added Translator Human Parity Data For a Chinese-English news transla… | {
"avatar_url": "https://avatars.githubusercontent.com/u/7915719?v=4",
"events_url": "https://api.github.com/users/leoxzhao/events{/privacy}",
"followers_url": "https://api.github.com/users/leoxzhao/followers",
"following_url": "https://api.github.com/users/leoxzhao/following{/other_user}",
"gists_url": "https://api.github.com/users/leoxzhao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/leoxzhao",
"id": 7915719,
"login": "leoxzhao",
"node_id": "MDQ6VXNlcjc5MTU3MTk=",
"organizations_url": "https://api.github.com/users/leoxzhao/orgs",
"received_events_url": "https://api.github.com/users/leoxzhao/received_events",
"repos_url": "https://api.github.com/users/leoxzhao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/leoxzhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leoxzhao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/leoxzhao"
} | [] | closed | false | null | [] | null | [
"Can you run `make style` to format the code and fix the CI please ?",
"> Can you run `make style` to format the code and fix the CI please ?\r\n\r\nI ran `make style` before this PR and just a few minutes ago. No changes to the code. Not sure why the CI is failing.",
"Also, I attempted to see if I can get the source Chinese sentences from `wmt17` dataset. But this call `data = load_dataset('wmt17', \"zh-en\")` failed with this error: `FileNotFoundError: Couldn't find file at https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-zh.tar.gz`. I think it should be possible and fairly straightforward to get the pairing source sentences from it. I just can not test it right now.",
"The `RemoteDatasetTest ` errors in the CI are fixed on master so it's fine",
"merging since the CI is fixed on master"
] | "2020-12-06T03:34:13Z" | "2020-12-09T13:22:45Z" | "2020-12-09T13:22:45Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1191.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1191",
"merged_at": "2020-12-09T13:22:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1191.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1191"
} | …tion system from Open dataset list for Dataset sprint, Microsoft Datasets tab. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1191/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1191/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2547 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2547/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2547/comments | https://api.github.com/repos/huggingface/datasets/issues/2547/events | https://github.com/huggingface/datasets/issues/2547 | 929,192,329 | MDU6SXNzdWU5MjkxOTIzMjk= | 2,547 | Dataset load_from_disk is too slow | {
"avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4",
"events_url": "https://api.github.com/users/avacaondata/events{/privacy}",
"followers_url": "https://api.github.com/users/avacaondata/followers",
"following_url": "https://api.github.com/users/avacaondata/following{/other_user}",
"gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/avacaondata",
"id": 35173563,
"login": "avacaondata",
"node_id": "MDQ6VXNlcjM1MTczNTYz",
"organizations_url": "https://api.github.com/users/avacaondata/orgs",
"received_events_url": "https://api.github.com/users/avacaondata/received_events",
"repos_url": "https://api.github.com/users/avacaondata/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions",
"type": "User",
"url": "https://api.github.com/users/avacaondata"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [
"Hi ! It looks like an issue with the virtual disk you are using.\r\n\r\nWe load datasets using memory mapping. In general it makes it possible to load very big files instantaneously since it doesn't have to read the file (it just assigns virtual memory to the file on disk).\r\nHowever there happens to be issues with virtual disks (for example on spot instances), for which memory mapping does a pass over the entire file, and this takes a while. We are discussing about this issue here: #2252 \r\n\r\nMemory mapping is something handled by the OS so we can't do much about it, though we're still trying to figure out what's causing this behavior exactly to see what we can do.",
"Okay, that's exactly my case, with spot instances... Therefore this isn't something we can change in any way to be able to load the dataset faster? I mean, what do you do internally at huggingface for being able to use spot instances with datasets efficiently?",
"There are no solutions yet unfortunately.\r\nWe're still trying to figure out a way to make the loading instantaneous on such disks, I'll keep you posted"
] | "2021-06-24T12:45:44Z" | "2021-06-25T14:56:38Z" | null | NONE | null | null | null | @lhoestq
## Describe the bug
It's not normal that I have to wait 7-8 hours for a dataset to be loaded from disk when there are no preprocessing steps; it's only loading it with load_from_disk. I have 96 CPUs, however only 1 is used for this, which is inefficient. Moreover, its usage is at 1%... This is happening in the context of language model training, so I'm wasting $100 each time I have to load the dataset from disk again (for example because the spot instance was stopped by AWS and I need to relaunch it).
## Steps to reproduce the bug
Just get the OSCAR dataset in Spanish (around 150GB), first save it to disk and then load the processed dataset. It's not dependent on the task you're doing, it just depends on the size of the text dataset.
## Expected results
I expect the dataset to be loaded in a reasonable time, using the whole machine for loading it: if the dataset is stored in multiple (.arrow) files and then loaded from those files, multiprocessing could be used for that, which would avoid wasting so much time.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.8.0
- Platform: Ubuntu 18
- Python version: 3.8
I've seen you're planning to include a streaming mode for load_dataset, but that only saves the downloading and processing time, which is not the problem for me. It cannot reduce the pure loading-from-disk time, so it's not a solution for my use case, or for anyone who wants to use your library for training a language model. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2547/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2547/timeline | null | null | false |
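A minimal sketch of the save/reload workflow described in issue 2547 above; the OSCAR Spanish config and the output path are illustrative assumptions, not taken from the report:

```python
import time

from datasets import load_dataset, load_from_disk

# Assumption: the Spanish OSCAR config is used only as an example of a ~150GB text dataset.
ds = load_dataset("oscar", "unshuffled_deduplicated_es", split="train")

# Save the processed dataset once on the instance's disk (illustrative path).
ds.save_to_disk("/data/oscar_es")

# Reloading relies on memory mapping; on some virtual disks (e.g. spot instances)
# this pass over the Arrow files can be slow and runs in a single process.
start = time.time()
reloaded = load_from_disk("/data/oscar_es")
print(f"Loaded {len(reloaded)} rows in {time.time() - start:.1f}s")
```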
https://api.github.com/repos/huggingface/datasets/issues/1533 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1533/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1533/comments | https://api.github.com/repos/huggingface/datasets/issues/1533/events | https://github.com/huggingface/datasets/pull/1533 | 764,835,913 | MDExOlB1bGxSZXF1ZXN0NTM4NzE4MDAz | 1,533 | add id_panl_bppt, a parallel corpus for en-id | {
"avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4",
"events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}",
"followers_url": "https://api.github.com/users/cahya-wirawan/followers",
"following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}",
"gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cahya-wirawan",
"id": 7669893,
"login": "cahya-wirawan",
"node_id": "MDQ6VXNlcjc2Njk4OTM=",
"organizations_url": "https://api.github.com/users/cahya-wirawan/orgs",
"received_events_url": "https://api.github.com/users/cahya-wirawan/received_events",
"repos_url": "https://api.github.com/users/cahya-wirawan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cahya-wirawan"
} | [] | closed | false | null | [] | null | [
"Hi @lhoestq, thanks for the review. I will have a look and update it accordingly.",
"Strange error message :-)\r\n\r\n```\r\n> tf_context = tf.python.context.context() # eager mode context\r\nE AttributeError: module 'tensorflow' has no attribute 'python'\r\n```\r\n"
] | "2020-12-13T03:11:27Z" | "2020-12-21T10:40:36Z" | "2020-12-21T10:40:36Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1533.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1533",
"merged_at": "2020-12-21T10:40:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1533.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1533"
} | Parallel Text Corpora for English - Indonesian | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1533/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1533/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1856 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1856/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1856/comments | https://api.github.com/repos/huggingface/datasets/issues/1856/events | https://github.com/huggingface/datasets/issues/1856 | 805,360,200 | MDU6SXNzdWU4MDUzNjAyMDA= | 1,856 | load_dataset("amazon_polarity") NonMatchingChecksumError | {
"avatar_url": "https://avatars.githubusercontent.com/u/19946372?v=4",
"events_url": "https://api.github.com/users/yanxi0830/events{/privacy}",
"followers_url": "https://api.github.com/users/yanxi0830/followers",
"following_url": "https://api.github.com/users/yanxi0830/following{/other_user}",
"gists_url": "https://api.github.com/users/yanxi0830/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yanxi0830",
"id": 19946372,
"login": "yanxi0830",
"node_id": "MDQ6VXNlcjE5OTQ2Mzcy",
"organizations_url": "https://api.github.com/users/yanxi0830/orgs",
"received_events_url": "https://api.github.com/users/yanxi0830/received_events",
"repos_url": "https://api.github.com/users/yanxi0830/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yanxi0830/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanxi0830/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yanxi0830"
} | [] | closed | false | null | [] | null | [
"Hi ! This issue may be related to #996 \r\nThis comes probably from the Quota Exceeded error from Google Drive.\r\nCan you try again tomorrow and see if you still have the error ?\r\n\r\nOn my side I didn't get any error today with `load_dataset(\"amazon_polarity\")`",
"+1 encountering this issue as well",
"@lhoestq Hi! I encounter the same error when loading `yelp_review_full`.\r\n\r\n```\r\nfrom datasets import load_dataset\r\ndataset_yp = load_dataset(\"yelp_review_full\")\r\n```\r\n\r\nWhen you say the \"Quota Exceeded from Google drive\". Is this a quota from the dataset owner? or the quota from our (the runner) Google Drive?",
"+1 Also encountering this issue",
"> When you say the \"Quota Exceeded from Google drive\". Is this a quota from the dataset owner? or the quota from our (the runner) Google Drive?\r\n\r\nEach file on Google Drive can be downloaded only a certain amount of times per day because of a quota. The quota is reset every day. So if too many people download the dataset the same day, then the quota is likely to exceed.\r\nThat's a really bad limitations of Google Drive and we should definitely find another host for these dataset than Google Drive.\r\nFor now I would suggest to wait and try again later..\r\n\r\nSo far the issue happened with CNN DailyMail, Amazon Polarity and Yelp Reviews. \r\nAre you experiencing the issue with other datasets ? @calebchiam @dtch1997 ",
"@lhoestq Gotcha, that is quite problematic...for what it's worth, I've had no issues with the other datasets I tried, such as `yelp_reviews_full` and `amazon_reviews_multi`.",
"Same issue today with \"big_patent\", though the symptoms are slightly different.\r\n\r\nWhen running\r\n\r\n```py\r\nfrom datasets import load_dataset\r\nload_dataset(\"big_patent\", split=\"validation\")\r\n```\r\n\r\nI get the following\r\n`FileNotFoundError: Local file \\huggingface\\datasets\\downloads\\6159313604f4f2c01e7d1cac52139343b6c07f73f6de348d09be6213478455c5\\bigPatentData\\train.tar.gz doesn't exist`\r\n\r\nI had to look into `6159313604f4f2c01e7d1cac52139343b6c07f73f6de348d09be6213478455c5` (which is a file instead of a folder) and got the following:\r\n\r\n`<!DOCTYPE html><html><head><title>Google Drive - Quota exceeded</title><meta http-equiv=\"content-type\" content=\"text/html; charset=utf-8\"/><link href=/static/doclist/client/css/4033072956-untrustedcontent.css rel=\"stylesheet\" nonce=\"JV0t61Smks2TEKdFCGAUFA\"><link rel=\"icon\" href=\"//ssl.gstatic.com/images/branding/product/1x/drive_2020q4_32dp.png\"/><style nonce=\"JV0t61Smks2TEKdFCGAUFA\">#gbar,#guser{font-size:13px;padding-top:0px !important;}#gbar{height:22px}#guser{padding-bottom:7px !important;text-align:right}.gbh,.gbd{border-top:1px solid #c9d7f1;font-size:1px}.gbh{height:0;position:absolute;top:24px;width:100%}@media all{.gb1{height:22px;margin-right:.5em;vertical-align:top}#gbar{float:left}}a.gb1,a.gb4{text-decoration:underline !important}a.gb1,a.gb4{color:#00c !important}.gbi .gb4{color:#dd8e27 !important}.gbf .gb4{color:#900 !important}\r\n</style><script nonce=\"iNUHigT+ENVQ3UZrLkFtRw\"></script></head><body><div id=gbar><nobr><a target=_blank class=gb1 href=\"https://www.google.fr/webhp?tab=ow\">Search</a> <a target=_blank class=gb1 href=\"http://www.google.fr/imghp?hl=en&tab=oi\">Images</a> <a target=_blank class=gb1 href=\"https://maps.google.fr/maps?hl=en&tab=ol\">Maps</a> <a target=_blank class=gb1 href=\"https://play.google.com/?hl=en&tab=o8\">Play</a> <a target=_blank class=gb1 href=\"https://www.youtube.com/?gl=FR&tab=o1\">YouTube</a> <a target=_blank class=gb1 href=\"https://news.google.com/?tab=on\">News</a> <a target=_blank class=gb1 href=\"https://mail.google.com/mail/?tab=om\">Gmail</a> <b class=gb1>Drive</b> <a target=_blank class=gb1 style=\"text-decoration:none\" href=\"https://www.google.fr/intl/en/about/products?tab=oh\"><u>More</u> »</a></nobr></div><div id=guser width=100%><nobr><span id=gbn class=gbi></span><span id=gbf class=gbf></span><span id=gbe></span><a target=\"_self\" href=\"/settings?hl=en_US\" class=gb4>Settings</a> | <a target=_blank href=\"//support.google.com/drive/?p=web_home&hl=en_US\" class=gb4>Help</a> | <a target=_top id=gb_70 href=\"https://accounts.google.com/ServiceLogin?hl=en&passive=true&continue=https://drive.google.com/uc%3Fexport%3Ddownload%26id%3D1J3mucMFTWrgAYa3LuBZoLRR3CzzYD3fa&service=writely&ec=GAZAMQ\" class=gb4>Sign in</a></nobr></div><div class=gbh style=left:0></div><div class=gbh style=right:0></div><div class=\"uc-main\"><div id=\"uc-text\"><p class=\"uc-error-caption\">Sorry, you can't view or download this file at this time.</p><p class=\"uc-error-subcaption\">Too many users have viewed or downloaded this file recently. Please try accessing the file again later. If the file you are trying to access is particularly large or is shared with many people, it may take up to 24 hours to be able to view or download the file. 
If you still can't access a file after 24 hours, contact your domain administrator.</p></div></div><div class=\"uc-footer\"><hr class=\"uc-footer-divider\">© 2021 Google - <a class=\"goog-link\" href=\"//support.google.com/drive/?p=web_home\">Help</a> - <a class=\"goog-link\" href=\"//support.google.com/drive/bin/answer.py?hl=en_US&answer=2450387\">Privacy & Terms</a></div></body></html>`",
"A similar issue arises when trying to stream the dataset\r\n\r\n```python\r\n>>> from datasets import load_dataset\r\n>>> iter_dset = load_dataset(\"amazon_polarity\", split=\"test\", streaming=True)\r\n>>> iter(iter_dset).__next__()\r\n\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n~\\lib\\tarfile.py in nti(s)\r\n 186 s = nts(s, \"ascii\", \"strict\")\r\n--> 187 n = int(s.strip() or \"0\", 8)\r\n 188 except ValueError:\r\n\r\nValueError: invalid literal for int() with base 8: 'e nonce='\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nInvalidHeaderError Traceback (most recent call last)\r\n~\\lib\\tarfile.py in next(self)\r\n 2288 try:\r\n-> 2289 tarinfo = self.tarinfo.fromtarfile(self)\r\n 2290 except EOFHeaderError as e:\r\n\r\n~\\lib\\tarfile.py in fromtarfile(cls, tarfile)\r\n 1094 buf = tarfile.fileobj.read(BLOCKSIZE)\r\n-> 1095 obj = cls.frombuf(buf, tarfile.encoding, tarfile.errors)\r\n 1096 obj.offset = tarfile.fileobj.tell() - BLOCKSIZE\r\n\r\n~\\lib\\tarfile.py in frombuf(cls, buf, encoding, errors)\r\n 1036\r\n-> 1037 chksum = nti(buf[148:156])\r\n 1038 if chksum not in calc_chksums(buf):\r\n\r\n~\\lib\\tarfile.py in nti(s)\r\n 188 except ValueError:\r\n--> 189 raise InvalidHeaderError(\"invalid header\")\r\n 190 return n\r\n\r\nInvalidHeaderError: invalid header\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nReadError Traceback (most recent call last)\r\n<ipython-input-5-6b9058341b2b> in <module>\r\n----> 1 iter(iter_dset).__next__()\r\n\r\n~\\lib\\site-packages\\datasets\\iterable_dataset.py in __iter__(self)\r\n 363\r\n 364 def __iter__(self):\r\n--> 365 for key, example in self._iter():\r\n 366 if self.features:\r\n 367 # we encode the example for ClassLabel feature types for example\r\n\r\n~\\lib\\site-packages\\datasets\\iterable_dataset.py in _iter(self)\r\n 360 else:\r\n 361 ex_iterable = self._ex_iterable\r\n--> 362 yield from ex_iterable\r\n 363\r\n 364 def __iter__(self):\r\n\r\n~\\lib\\site-packages\\datasets\\iterable_dataset.py in __iter__(self)\r\n 77\r\n 78 def __iter__(self):\r\n---> 79 yield from self.generate_examples_fn(**self.kwargs)\r\n 80\r\n 81 def shuffle_data_sources(self, seed: Optional[int]) -> \"ExamplesIterable\":\r\n\r\n~\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\amazon_polarity\\56923eeb72030cb6c4ea30c8a4e1162c26b25973475ac1f44340f0ec0f2936f4\\amazon_polarity.py in _generate_examples(self, filepath, files)\r\n 114 def _generate_examples(self, filepath, files):\r\n 115 \"\"\"Yields examples.\"\"\"\r\n--> 116 for path, f in files:\r\n 117 if path == filepath:\r\n 118 lines = (line.decode(\"utf-8\") for line in f)\r\n\r\n~\\lib\\site-packages\\datasets\\utils\\streaming_download_manager.py in __iter__(self)\r\n 616\r\n 617 def __iter__(self):\r\n--> 618 yield from self.generator(*self.args, **self.kwargs)\r\n 619\r\n 620\r\n\r\n~\\lib\\site-packages\\datasets\\utils\\streaming_download_manager.py in _iter_from_urlpath(cls, urlpath, use_auth_token)\r\n 644 ) -> Generator[Tuple, None, None]:\r\n 645 with xopen(urlpath, \"rb\", use_auth_token=use_auth_token) as f:\r\n--> 646 yield from cls._iter_from_fileobj(f)\r\n 647\r\n 648 @classmethod\r\n\r\n~\\lib\\site-packages\\datasets\\utils\\streaming_download_manager.py in _iter_from_fileobj(cls, f)\r\n 624 @classmethod\r\n 625 def _iter_from_fileobj(cls, f) -> Generator[Tuple, None, None]:\r\n--> 626 stream = tarfile.open(fileobj=f, 
mode=\"r|*\")\r\n 627 for tarinfo in stream:\r\n 628 file_path = tarinfo.name\r\n\r\n~\\lib\\tarfile.py in open(cls, name, mode, fileobj, bufsize, **kwargs)\r\n 1603 stream = _Stream(name, filemode, comptype, fileobj, bufsize)\r\n 1604 try:\r\n-> 1605 t = cls(name, filemode, stream, **kwargs)\r\n 1606 except:\r\n 1607 stream.close()\r\n\r\n~\\lib\\tarfile.py in __init__(self, name, mode, fileobj, format, tarinfo, dereference, ignore_zeros, encoding, errors, pax_headers, debug, errorlevel, copybufsize)\r\n 1484 if self.mode == \"r\":\r\n 1485 self.firstmember = None\r\n-> 1486 self.firstmember = self.next()\r\n 1487\r\n 1488 if self.mode == \"a\":\r\n\r\n~\\lib\\tarfile.py in next(self)\r\n 2299 continue\r\n 2300 elif self.offset == 0:\r\n-> 2301 raise ReadError(str(e))\r\n 2302 except EmptyHeaderError:\r\n 2303 if self.offset == 0:\r\n\r\nReadError: invalid header\r\n\r\n```",
"This error still happens, but for a different reason now: Google Drive returns a warning instead of the dataset.",
"Met the same issue +1",
"Hi ! Thanks for reporting. Google Drive changed the way to bypass the warning message recently.\r\n\r\nThe latest release `1.18.4` fixes this for datasets loaded in a regular way.\r\n\r\nWe opened a PR to fix this recently for streaming mode at #3843 - we'll do a new release once the fix is merged :)",
"Fixed by:\r\n- #3787 \r\n- #3843"
] | "2021-02-10T10:00:56Z" | "2022-03-15T13:55:24Z" | "2022-03-15T13:55:23Z" | NONE | null | null | null | Hi, it seems that loading the amazon_polarity dataset gives a NonMatchingChecksumError.
To reproduce:
```
from datasets import load_dataset

load_dataset("amazon_polarity")
```
This will give the following error:
```
---------------------------------------------------------------------------
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-3-8559a03fe0f8> in <module>()
----> 1 dataset = load_dataset("amazon_polarity")
3 frames
/usr/local/lib/python3.6/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
37 if len(bad_urls) > 0:
38 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls))
40 logger.info("All the checksums matched successfully" + for_verification_name)
41
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/u/0/uc?id=0Bz8a_Dbh9QhbaW12WVVZS2drcnM&export=download']
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1856/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1856/timeline | null | completed | false |
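The checksum failures in issue 1856 above come from Google Drive serving a quota-exceeded page instead of the archive. A hedged sketch of the usual retries once the quota has reset; the exact parameter names depend on the installed `datasets` version and are assumptions here:

```python
from datasets import load_dataset

# Force a fresh download so a cached HTML error page isn't reused.
ds = load_dataset("amazon_polarity", download_mode="force_redownload")

# As a last resort, skip checksum verification (this flag was later deprecated/renamed,
# so treat it as an assumption valid only for older releases).
ds = load_dataset("amazon_polarity", ignore_verifications=True)
```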
https://api.github.com/repos/huggingface/datasets/issues/2045 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2045/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2045/comments | https://api.github.com/repos/huggingface/datasets/issues/2045/events | https://github.com/huggingface/datasets/pull/2045 | 830,351,527 | MDExOlB1bGxSZXF1ZXN0NTkxODc2Mjcz | 2,045 | Preserve column ordering in Dataset.rename_column | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"Not sure why CI isn't triggered.\r\n\r\n@lhoestq Can you please help me with this? ",
"I don't know how to trigger it manually, but an empty commit should do the job"
] | "2021-03-12T18:26:47Z" | "2021-03-16T14:48:05Z" | "2021-03-16T14:35:05Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2045.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2045",
"merged_at": "2021-03-16T14:35:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2045.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2045"
} | Currently `Dataset.rename_column` doesn't necessarily preserve the order of the columns:
```python
>>> from datasets import Dataset
>>> d = Dataset.from_dict({'sentences': ["s1", "s2"], 'label': [0, 1]})
>>> d
Dataset({
features: ['sentences', 'label'],
num_rows: 2
})
>>> d.rename_column('sentences', 'text')
Dataset({
features: ['label', 'text'],
num_rows: 2
})
```
This PR fixes this. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2045/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2045/timeline | null | null | true |
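A minimal check of the column-ordering behaviour targeted by PR 2045 above (an illustrative assertion, not taken from the PR's test suite):

```python
from datasets import Dataset

d = Dataset.from_dict({"sentences": ["s1", "s2"], "label": [0, 1]})
renamed = d.rename_column("sentences", "text")

# With the fix, the renamed column keeps its original position.
assert renamed.column_names == ["text", "label"]
```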
https://api.github.com/repos/huggingface/datasets/issues/5336 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5336/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5336/comments | https://api.github.com/repos/huggingface/datasets/issues/5336/events | https://github.com/huggingface/datasets/pull/5336 | 1,479,649,900 | PR_kwDODunzps5Egzed | 5,336 | Set `IterableDataset.map` param `batch_size` typing as optional | {
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5336). All of your documentation changes will be reflected on that endpoint.",
"Hi @mariosasko, @lhoestq I was wondering whether we should include `batched` as a `pytest.mark` param for the functions testing `IterableDataset.map` so as to ensure that the changes done in this PR work fine without breaking anything of the actual functionality.\r\n\r\nI've pushed updated tests just for one of the unit testing functions to be run as `pytest tests/test_iterable_dataset.py::test_mapped_examples_iterable -s --durations 0`, but some are still missing `batched` param, it was just to ask you whether we're supposed to do this for the rest of the functions or not, if it's a yes I'll push the commit as it's ready, but didn't want to push extra stuff that may be discarded later!\r\n\r\nThanks :hugs:",
"Thanks for the feedback @lhoestq, I agree with keeping `Optional` instead of `Union[type, None]` for now 👍🏻"
] | "2022-12-06T17:08:10Z" | "2022-12-07T14:14:56Z" | "2022-12-07T14:06:27Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5336.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5336",
"merged_at": "2022-12-07T14:06:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5336.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5336"
} | This PR solves #5325
~Indeed we're using the typing for optional values as `Union[type, None]` as it's similar to how Python 3.10 handles optional values as `type | None`, instead of using `Optional[type]`.~
~Do we want to start using `Union[type, None]` for type-hinting optional values or just keep on using `Optional`?~ -> Keeping `Optional` still for consistency with the rest of the code in `datasets`
Also, we now allow `batch_size` to be `None` for `IterableDataset.map` and `IterableDataset.filter`, and for the internal iterables such as `MappedExamplesIterable`: since `map` internally instantiates those and propagates the `batch_size` param, if it can be `None` for `map` it should also be allowed for `MappedExamplesIterable`, as well as for `FilteredExamplesIterable` when calling `IterableDataset.filter`.
## TODOs
- [x] Add integration tests
- [x] Handle scenario where `batched=True` and `batch_size=None` or `batch_size<=0` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5336/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5336/timeline | null | null | true |
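A small usage sketch for PR 5336 above, assuming a `datasets` version that provides `IterableDataset.from_generator`; it only illustrates that `batched=True` with `batch_size=None` is an accepted combination:

```python
from datasets import IterableDataset

def gen():
    for text in ["a", "bb", "ccc"]:
        yield {"text": text}

def add_len(batch):
    batch["length"] = [len(t) for t in batch["text"]]
    return batch

ids = IterableDataset.from_generator(gen)
# Assumption: with batched=True, batch_size=None is treated as one batch over the full iterable.
mapped = ids.map(add_len, batched=True, batch_size=None)
print(list(mapped))
```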
https://api.github.com/repos/huggingface/datasets/issues/4495 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4495/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4495/comments | https://api.github.com/repos/huggingface/datasets/issues/4495/events | https://github.com/huggingface/datasets/pull/4495 | 1,271,851,025 | PR_kwDODunzps45sAgO | 4,495 | Fix patching module that doesn't exist | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-06-15T08:17:50Z" | "2022-06-15T16:40:49Z" | "2022-06-15T08:54:09Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4495.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4495",
"merged_at": "2022-06-15T08:54:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4495.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4495"
} | Reported in https://github.com/huggingface/huggingface_hub/runs/6894703718?check_suite_focus=true
When trying to patch `scipy.io.loadmat`:
```python
ModuleNotFoundError: No module named 'scipy'
```
Instead it shouldn't raise an error and do nothing
Bug introduced by #4375
Fix https://github.com/huggingface/datasets/issues/4494 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4495/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4495/timeline | null | null | true |
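An illustrative sketch of the guard described in PR 4495 above, not the actual `datasets` patching code: the attribute to patch is looked up defensively so a missing optional dependency (here `scipy`) is skipped instead of raising `ModuleNotFoundError`:

```python
import importlib

def get_attr_to_patch(module_name: str, attr_path: str):
    """Return the attribute to patch, or None if the module isn't installed."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        # e.g. scipy isn't installed: skip patching instead of failing.
        return None
    obj = module
    for name in attr_path.split("."):
        obj = getattr(obj, name, None)
        if obj is None:
            return None
    return obj

loadmat = get_attr_to_patch("scipy.io", "loadmat")
if loadmat is None:
    print("scipy not available, nothing to patch")
```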
https://api.github.com/repos/huggingface/datasets/issues/4466 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4466/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4466/comments | https://api.github.com/repos/huggingface/datasets/issues/4466/events | https://github.com/huggingface/datasets/pull/4466 | 1,266,159,920 | PR_kwDODunzps45ZLsd | 4,466 | Optimize contiguous shard and select | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I thought of just mentioning the benefits I got. Here's the code that @lhoestq provided:\r\n\r\n```py\r\nimport os\r\nfrom datasets import load_dataset\r\nfrom tqdm.auto import tqdm\r\n\r\nds = load_dataset(\"squad\", split=\"train\")\r\nos.makedirs(\"tmp\")\r\n\r\nnum_shards = 5\r\nfor index in tqdm(range(num_shards)):\r\n size = len(ds) // num_shards\r\n shard = Dataset(ds.data.slice(size * index, size), fingerprint=f\"{ds._fingerprint}_{index}\")\r\n shard.to_json(f\"tmp/data_{index}.jsonl\")\r\n```\r\n\r\nIt is 1.64s. Previously the code was:\r\n\r\n```py\r\nnum_shards = 5\r\nfor index in tqdm(range(num_shards)):\r\n shard = ds.shard(num_shards=num_shards, index=index, contiguous=True)\r\n shard.to_json(f\"tmp/data_{index}.jsonl\")\r\n # upload_to_gcs(f\"tmp/data_{index}.jsonl\")\r\n```\r\n\r\nIt was 2min31s. \r\n\r\nI ran it on my humble MacBook Pro:\r\n\r\n<img width=\"574\" alt=\"image\" src=\"https://user-images.githubusercontent.com/22957388/172864881-f1db489a-2305-47f2-a07f-7d3df610b1b8.png\">\r\n",
"I addressed your comments @albertvillanova , let me know what you think :)"
] | "2022-06-09T13:45:39Z" | "2022-06-14T16:04:30Z" | "2022-06-14T15:54:45Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4466.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4466",
"merged_at": "2022-06-14T15:54:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4466.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4466"
Currently `.shard()` and `.select()` always create an indices mapping. However, if the requested data are contiguous, it's much more efficient to simply slice the Arrow table instead of building an indices mapping. In particular:
- the shard/select operation will be much faster
- reading speed will be much faster in the resulting dataset, since it won't have to do a lookup step in the indices mapping
Since `.shard()` is also used for `.map()` with `num_proc>1`, it will also significantly improve the reading speed of multiprocessed `.map()` operations
Here is an example of speed-up:
```python
>>> import io
>>> import numpy as np
>>> from datasets import Dataset
>>> ds = Dataset.from_dict({"a": np.random.rand(10_000_000)})
>>> shard = ds.shard(num_shards=4, index=0, contiguous=True) # this calls `.select(range(2_500_000))`
>>> buf = io.BytesIO()
>>> %time shard.to_json(buf)
Creating json from Arrow format: 100%|██████████████████| 100/100 [00:00<00:00, 376.17ba/s]
CPU times: user 258 ms, sys: 9.06 ms, total: 267 ms
Wall time: 266 ms
```
while previously it was
```python
Creating json from Arrow format: 100%|███████████████████| 100/100 [00:03<00:00, 29.41ba/s]
CPU times: user 3.33 s, sys: 69.1 ms, total: 3.39 s
Wall time: 3.4 s
```
In this simple case the speed-up is x10, but @sayakpaul experienced a x100 speed-up on their data when exporting to JSON.
## Implementation details
I mostly improved `.select()`: it now checks if the input corresponds to a contiguous chunk of data and then it slices the main Arrow table (or the indices mapping table if it exists). To check if the input indices are contiguous it checks two possibilities:
- if the indices are of type `range`, it checks that start >= 0 and step == 1
- otherwise in the general case, it iterates over the indices. If all the indices are contiguous then we're good, otherwise we have to build an indices mapping.
Having to iterate over the indices doesn't cause performance issues IMO because:
- either they are contiguous and in this case the cost of iterating over the indices is much less than the cost of creating an indices mapping
- or they are not contiguous, and then iterating generally stops quickly, as soon as it encounters the first index that is not contiguous. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4466/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4466/timeline | null | null | true |
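A standalone sketch of the contiguity check that PR 4466 above describes (an illustrative helper, not the actual `datasets` implementation):

```python
def indices_are_contiguous(indices) -> bool:
    """Return True if `indices` select a contiguous, forward slice."""
    if isinstance(indices, range):
        return indices.start >= 0 and indices.step == 1
    previous = None
    for i in indices:
        if previous is not None and i != previous + 1:
            # Stops as soon as the first non-contiguous index is encountered.
            return False
        previous = i
    return True

assert indices_are_contiguous(range(2_500_000))
assert not indices_are_contiguous([0, 2, 3])
```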
https://api.github.com/repos/huggingface/datasets/issues/2541 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2541/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2541/comments | https://api.github.com/repos/huggingface/datasets/issues/2541/events | https://github.com/huggingface/datasets/pull/2541 | 928,529,078 | MDExOlB1bGxSZXF1ZXN0Njc2NTIwNDgx | 2,541 | update discofuse link cc @ekQ | {
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/VictorSanh",
"id": 16107619,
"login": "VictorSanh",
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/VictorSanh"
} | [] | closed | false | null | [] | null | [
"The CI is failing because the dataset tags for `discofuse` are missing. I'm merging this PR since this is unrelated to this PR, but feel free to open another PR to add the tags here if you have some time:\r\n\r\nhttps://github.com/huggingface/datasets/blob/19408f9fab85c79b966085574cd2da3b90959179/datasets/discofuse/README.md#L1-L5\r\n\r\nThe missing tags are:\r\n```\r\n'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'pretty_name', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n```\r\nThanks again !"
] | "2021-06-23T18:24:58Z" | "2021-06-28T14:34:51Z" | "2021-06-28T14:34:50Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2541.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2541",
"merged_at": "2021-06-28T14:34:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2541.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2541"
} | Updating the discofuse link: https://github.com/google-research-datasets/discofuse/commit/fd4b120cb3dd19a417e7f3b5432010b574b5eeee | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2541/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2541/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6466 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6466/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6466/comments | https://api.github.com/repos/huggingface/datasets/issues/6466/events | https://github.com/huggingface/datasets/issues/6466 | 2,022,601,176 | I_kwDODunzps54jnHY | 6,466 | Can't align optional features of struct | {
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Dref360",
"id": 8976546,
"login": "Dref360",
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"repos_url": "https://api.github.com/users/Dref360/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Dref360"
} | [] | open | false | null | [] | null | [
"Friendly bump, I would be happy to work on this issue once I get the go-ahead from the dev team. "
] | "2023-12-03T15:57:07Z" | "2023-12-11T14:38:34Z" | null | CONTRIBUTOR | null | null | null | ### Describe the bug
Hello!
I'm currently experiencing an issue where I can't concatenate datasets if an inner field of a Feature is Optional.
I have a column named `speaker`, and this holds some information about a speaker.
```python
@dataclass
class Speaker:
name: str
email: Optional[str]
```
If I have two datasets and one of them happens to have `email` always None, then I get `The features can't be aligned because the key email of features`
### Steps to reproduce the bug
You can run the following script:
```python
ds = Dataset.from_dict({'speaker': [{'name': 'Ben', 'email': None}]})
ds2 = Dataset.from_dict({'speaker': [{'name': 'Fred', 'email': '[email protected]'}]})
concatenate_datasets([ds, ds2])
>>>The features can't be aligned because the key speaker of features {'speaker': {'email': Value(dtype='string', id=None), 'name': Value(dtype='string', id=None)}} has unexpected type - {'email': Value(dtype='string', id=None), 'name': Value(dtype='string', id=None)} (expected either {'email': Value(dtype='null', id=None), 'name': Value(dtype='string', id=None)} or Value("null").
```
### Expected behavior
I think this should work; if two top-level columns were in the same situation it would properly cast to `string`.
```python
ds = Dataset.from_dict({'email': [None, None]})
ds2 = Dataset.from_dict({'email': ['[email protected]', '[email protected]']})
concatenate_datasets([ds, ds2])
>>> # Works!
```
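A possible interim workaround (only a sketch, assuming explicit casting of the nested feature is supported) is to cast both datasets to a common schema before concatenating:
```python
# Hypothetical workaround sketch, not part of the original report.
from datasets import Dataset, Features, Value, concatenate_datasets

ds = Dataset.from_dict({'speaker': [{'name': 'Ben', 'email': None}]})
ds2 = Dataset.from_dict({'speaker': [{'name': 'Fred', 'email': '[email protected]'}]})

# Explicitly cast the all-None nested field to string so both schemas match.
target = Features({'speaker': {'name': Value('string'), 'email': Value('string')}})
combined = concatenate_datasets([ds.cast(target), ds2.cast(target)])
print(combined.features)
```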
### Environment info
- `datasets` version: 2.15.1.dev0
- Platform: Linux-5.15.0-89-generic-x86_64-with-glibc2.35
- Python version: 3.9.13
- `huggingface_hub` version: 0.19.4
- PyArrow version: 9.0.0
- Pandas version: 1.4.4
- `fsspec` version: 2023.6.0
I would be happy to fix this issue. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6466/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6466/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1779 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1779/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1779/comments | https://api.github.com/repos/huggingface/datasets/issues/1779/events | https://github.com/huggingface/datasets/pull/1779 | 793,539,703 | MDExOlB1bGxSZXF1ZXN0NTYxMjEwNjI5 | 1,779 | Ignore definition line number of functions for caching | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | "2021-01-25T16:42:29Z" | "2021-01-26T10:20:20Z" | "2021-01-26T10:20:19Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1779.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1779",
"merged_at": "2021-01-26T10:20:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1779.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1779"
} | As noticed in #1718, when a function used for processing with `map` is moved within its Python file, the change of line number causes the caching mechanism to consider it a different function. Therefore, in this case, it recomputes everything.
This is because we were not ignoring the line number definition for such functions (even though we're doing it for lambda functions).
For example this code currently prints False:
```python
from datasets.fingerprint import Hasher
# define once
def foo(x):
return x
h = Hasher.hash(foo)
# define a second time elsewhere
def foo(x):
return x
print(h == Hasher.hash(foo))
```
I changed this by ignoring the line number for all functions. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1779/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1779/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5610 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5610/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5610/comments | https://api.github.com/repos/huggingface/datasets/issues/5610/events | https://github.com/huggingface/datasets/issues/5610 | 1,610,698,006 | I_kwDODunzps5gAU0W | 5,610 | use datasets streaming mode in trainer ddp mode cause memory leak | {
"avatar_url": "https://avatars.githubusercontent.com/u/15223544?v=4",
"events_url": "https://api.github.com/users/gromzhu/events{/privacy}",
"followers_url": "https://api.github.com/users/gromzhu/followers",
"following_url": "https://api.github.com/users/gromzhu/following{/other_user}",
"gists_url": "https://api.github.com/users/gromzhu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gromzhu",
"id": 15223544,
"login": "gromzhu",
"node_id": "MDQ6VXNlcjE1MjIzNTQ0",
"organizations_url": "https://api.github.com/users/gromzhu/orgs",
"received_events_url": "https://api.github.com/users/gromzhu/received_events",
"repos_url": "https://api.github.com/users/gromzhu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gromzhu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gromzhu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gromzhu"
} | [] | open | false | null | [] | null | [
"Same problem, \r\ntransformers 4.28.1\r\ndatasets 2.12.0\r\n\r\nleak around 100Mb per 10 seconds when use dataloader_num_werker > 0 in training argumennts for transformer train, possile bug in transformers repo, but still not found solution :(\r\n",
"found an article described a problem, may be helpful for somebody:\r\nhttps://ppwwyyxx.com/blog/2022/Demystify-RAM-Usage-in-Multiprocess-DataLoader/\r\nI confirm, it`s not memory leak, after some time memory growing has stopped"
] | "2023-03-06T05:26:49Z" | "2023-05-07T15:15:32Z" | null | NONE | null | null | null | ### Describe the bug
Using the datasets streaming mode with the Trainer in DDP mode causes a memory leak.
### Steps to reproduce the bug
```python
import os
import time
import datetime
import sys
import numpy as np
import random
import torch
from torch.utils.data import Dataset, DataLoader, random_split, RandomSampler, SequentialSampler,DistributedSampler,BatchSampler
torch.manual_seed(42)
from transformers import GPT2LMHeadModel, GPT2Tokenizer, GPT2Config, GPT2Model,DataCollatorForLanguageModeling,AutoModelForCausalLM
from transformers import AdamW, get_linear_schedule_with_warmup
hf_model_path ='./Wenzhong-GPT2-110M'
tokenizer = GPT2Tokenizer.from_pretrained(hf_model_path)
tokenizer.add_special_tokens({'pad_token': '<|pad|>'})
from datasets import load_dataset
gpus=8
max_len = 576
batch_size_node = 17
save_step = 5000
gradient_accumulation = 2
dataloader_num = 4
max_step = 351000*1000//batch_size_node//gradient_accumulation//gpus
#max_step = -1
print("total_step:%d"%(max_step))
import datasets
datasets.version
dataset = load_dataset("text", data_files="./gpt_data_v1/*",split='train',cache_dir='./dataset_cache',streaming=True)
print('load over')
shuffled_dataset = dataset.shuffle(seed=42)
print('shuffle over')
def dataset_tokener(example,max_lenth=max_len):
    example['text'] = list(map(lambda x : x.strip()+'<|endoftext|>',example['text'] ))
    return tokenizer(example['text'], truncation=True, max_length=max_lenth, padding="longest")
new_new_dataset = shuffled_dataset.map(dataset_tokener, batched=True, remove_columns=["text"])
print('map over')
configuration = GPT2Config.from_pretrained(hf_model_path, output_hidden_states=False)
model = AutoModelForCausalLM.from_pretrained(hf_model_path)
model.resize_token_embeddings(len(tokenizer))
seed_val = 42
random.seed(seed_val)
np.random.seed(seed_val)
torch.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val)
from transformers import Trainer,TrainingArguments
import os
print("strat train")
training_args = TrainingArguments(output_dir="./test_trainer",
num_train_epochs=1.0,
report_to="none",
do_train=True,
dataloader_num_workers=dataloader_num,
local_rank=int(os.environ.get('LOCAL_RANK', -1)),
overwrite_output_dir=True,
logging_strategy='steps',
logging_first_step=True,
logging_dir="./logs",
log_on_each_node=False,
per_device_train_batch_size=batch_size_node,
warmup_ratio=0.03,
save_steps=save_step,
save_total_limit=5,
gradient_accumulation_steps=gradient_accumulation,
max_steps=max_step,
disable_tqdm=False,
data_seed=42
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=new_new_dataset,
eval_dataset=None,
tokenizer=tokenizer,
data_collator=DataCollatorForLanguageModeling(tokenizer,mlm=False),
#compute_metrics=compute_metrics if training_args.do_eval and not is_torch_tpu_available() else None,
#preprocess_logits_for_metrics=preprocess_logits_for_metrics
#if training_args.do_eval and not is_torch_tpu_available()
#else None,
)
trainer.train(resume_from_checkpoint=True)
```
### Expected behavior
Use the training code above.
My dataset ./gpt_data_v1 has 1000 files; each file is 120 MB.
The start command is: `python -m torch.distributed.launch --nproc_per_node=8 my_train.py`
Here is the result:
![image](https://user-images.githubusercontent.com/15223544/223026042-1a81489f-897a-43e4-8339-65a202fd5dc7.png)
Here is the memory usage monitored over 12 hours:
![image](https://user-images.githubusercontent.com/15223544/223027076-14e32e8b-9608-4282-9a80-f15d0277026d.png)
Every dataloader worker allocates over 24 GB of CPU memory.
According to the memory usage monitored over 12 hours, small amounts of memory are sometimes released, but total memory usage keeps increasing.
I think the datasets streaming mode should not use this much memory, so there may be a memory leak somewhere.
### Environment info
pytorch 1.11.0
py 3.8
cuda 11.3
transformers 4.26.1
datasets 2.9.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5610/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5610/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4048 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4048/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4048/comments | https://api.github.com/repos/huggingface/datasets/issues/4048/events | https://github.com/huggingface/datasets/issues/4048 | 1,183,804,576 | I_kwDODunzps5Gj2yg | 4,048 | Split size error on `amazon_us_reviews` / `PC_v1_00` dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/191985?v=4",
"events_url": "https://api.github.com/users/trentonstrong/events{/privacy}",
"followers_url": "https://api.github.com/users/trentonstrong/followers",
"following_url": "https://api.github.com/users/trentonstrong/following{/other_user}",
"gists_url": "https://api.github.com/users/trentonstrong/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/trentonstrong",
"id": 191985,
"login": "trentonstrong",
"node_id": "MDQ6VXNlcjE5MTk4NQ==",
"organizations_url": "https://api.github.com/users/trentonstrong/orgs",
"received_events_url": "https://api.github.com/users/trentonstrong/received_events",
"repos_url": "https://api.github.com/users/trentonstrong/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/trentonstrong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trentonstrong/subscriptions",
"type": "User",
"url": "https://api.github.com/users/trentonstrong"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/191985?v=4",
"events_url": "https://api.github.com/users/trentonstrong/events{/privacy}",
"followers_url": "https://api.github.com/users/trentonstrong/followers",
"following_url": "https://api.github.com/users/trentonstrong/following{/other_user}",
"gists_url": "https://api.github.com/users/trentonstrong/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/trentonstrong",
"id": 191985,
"login": "trentonstrong",
"node_id": "MDQ6VXNlcjE5MTk4NQ==",
"organizations_url": "https://api.github.com/users/trentonstrong/orgs",
"received_events_url": "https://api.github.com/users/trentonstrong/received_events",
"repos_url": "https://api.github.com/users/trentonstrong/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/trentonstrong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trentonstrong/subscriptions",
"type": "User",
"url": "https://api.github.com/users/trentonstrong"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/191985?v=4",
"events_url": "https://api.github.com/users/trentonstrong/events{/privacy}",
"followers_url": "https://api.github.com/users/trentonstrong/followers",
"following_url": "https://api.github.com/users/trentonstrong/following{/other_user}",
"gists_url": "https://api.github.com/users/trentonstrong/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/trentonstrong",
"id": 191985,
"login": "trentonstrong",
"node_id": "MDQ6VXNlcjE5MTk4NQ==",
"organizations_url": "https://api.github.com/users/trentonstrong/orgs",
"received_events_url": "https://api.github.com/users/trentonstrong/received_events",
"repos_url": "https://api.github.com/users/trentonstrong/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/trentonstrong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/trentonstrong/subscriptions",
"type": "User",
"url": "https://api.github.com/users/trentonstrong"
}
] | null | [
"Follow-up: I have confirmed there are no duplicate lines via `sort amazon_reviews_us_PC_v1_00.tsv | uniq -cd` after extracting the raw file.",
"Hi @trentonstrong, thanks for reporting!\r\n\r\nI confirm that loading this dataset configuration throws a `NonMatchingSplitsSizesError`:\r\n```\r\nNonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=350242049, num_examples=785730, dataset_name='amazon_us_reviews'), 'recorded': SplitInfo(name='train', num_bytes=3982712078, num_examples=6908554, dataset_name='amazon_us_reviews')}]\r\n```\r\n\r\nAlso thank you for your offer to fix this. You can find information about how to update the metadata JSON file here: https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#automatically-add-code-metadata\r\n```shell\r\ndatasets-cli test datasets/amazon_us_reviews --save_infos --all_configs\r\n```\r\nPlease, feel free to open a PR with this fix. And do not hesitate to ping me if you need any help.",
"No sweat. Will get it patched up ASAP."
] | "2022-03-28T18:12:04Z" | "2022-04-08T12:29:30Z" | "2022-04-08T12:29:30Z" | CONTRIBUTOR | null | null | null | ## Describe the bug
When downloading this subset as of 3-28-2022, you will encounter a split size error after the dataset is extracted. The extracted dataset has roughly 6M rows, while the split expects <1M.
Upon digging a little deeper, I downloaded the raw files from `https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_PC_v1_00.tsv.gz` and extracted them. A line count via `wc -l` confirms the ~6m number that we see and the data looks valid at a glance (I did not check for duplicate rows). My guess is this file has either been updated in place or there is a bug in the dataset metadata.
Happy to submit a PR and fix this up if it turns out to be a metadata issue, but wanted to get some other :eyes: on it first.
## Steps to reproduce the bug
```python
load_dataset('amazon_us_reviews', 'PC_v1_00')
```
## Expected results
Dataset is downloaded and extracted successfully.
## Actual results
A split size exception is thrown.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4048/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4048/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4127 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4127/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4127/comments | https://api.github.com/repos/huggingface/datasets/issues/4127/events | https://github.com/huggingface/datasets/pull/4127 | 1,197,297,756 | PR_kwDODunzps4132EN | 4,127 | Add configs with processed data in medical_dialog dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-04-08T13:08:16Z" | "2022-05-06T08:39:50Z" | "2022-04-08T16:20:51Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4127.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4127",
"merged_at": "2022-04-08T16:20:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4127.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4127"
} | There exist processed data files that do not require parsing the raw data files (which can take a long time).
Fix #4122. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4127/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4127/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3789 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3789/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3789/comments | https://api.github.com/repos/huggingface/datasets/issues/3789/events | https://github.com/huggingface/datasets/pull/3789 | 1,150,587,404 | PR_kwDODunzps4zeQpx | 3,789 | Add URL and ID fields to Wikipedia dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"Do you think we have a dedicated branch for all the changes we want to do to wikipedia ? Then once everything looks good + we have preprocessed the main languages, we can merge it on the `master` branch",
"Yes, @lhoestq, I agree with you.\r\n\r\nI have just created the dedicated branch [`update-wikipedia`](https://github.com/huggingface/datasets/tree/update-wikipedia). We can merge every PR (once validated) to that branch; once all changes are merged to that branch, we could create the preprocessed datasets and then merge the branch to master. ",
"@lhoestq I guess you approve this PR?"
] | "2022-02-25T15:34:37Z" | "2022-03-04T08:24:24Z" | "2022-03-04T08:24:23Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3789.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3789",
"merged_at": "2022-03-04T08:24:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3789.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3789"
} | This PR adds the URL field, so that we conform to proper attribution, required by their license: provide credit to the authors by including a hyperlink (where possible) or URL to the page or pages you are re-using.
About the conversion from title to URL, I found that apart from replacing blanks with underscores, some other special characters must also be percent-encoded (e.g. `"` to `%22`): https://meta.wikimedia.org/wiki/Help:URL
Therefore, I have finally used the `urllib.parse.quote` function (a small sketch follows the quoted docs below). This additionally percent-encodes non-ASCII characters, but the Wikimedia docs say these are equivalent:
> For the other characters either the code or the character can be used in internal and external links, they are equivalent. The system does a conversion when needed.
> [[%C3%80_propos_de_M%C3%A9ta]]
> is rendered as [À_propos_de_Méta](https://meta.wikimedia.org/wiki/%C3%80_propos_de_M%C3%A9ta), almost like [À propos de Méta](https://meta.wikimedia.org/wiki/%C3%80_propos_de_M%C3%A9ta), which leads to this page on Meta with in the address bar the URL
> [http://meta.wikipedia.org/wiki/%C3%80_propos_de_M%C3%A9ta](https://meta.wikipedia.org/wiki/%C3%80_propos_de_M%C3%A9ta)
> while [http://meta.wikipedia.org/wiki/À_propos_de_Méta](https://meta.wikipedia.org/wiki/%C3%80_propos_de_M%C3%A9ta) leads to the same.
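For illustration only, here is a minimal sketch of that title-to-URL conversion (an assumed shape; the actual loader code may differ):
```python
from urllib.parse import quote

def title_to_url(title: str, language: str = "en") -> str:
    # Replace blanks with underscores, then percent-encode the remaining special characters.
    return f"https://{language}.wikipedia.org/wiki/{quote(title.replace(' ', '_'))}"

print(title_to_url("À propos de Méta", language="fr"))
# -> https://fr.wikipedia.org/wiki/%C3%80_propos_de_M%C3%A9ta
```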
Fix #3398.
CC: @geohci | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3789/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3789/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/875 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/875/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/875/comments | https://api.github.com/repos/huggingface/datasets/issues/875/events | https://github.com/huggingface/datasets/issues/875 | 748,194,311 | MDU6SXNzdWU3NDgxOTQzMTE= | 875 | bug in boolq dataset loading | {
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rabeehk",
"id": 6278280,
"login": "rabeehk",
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rabeehk"
} | [] | closed | false | null | [] | null | [
"I just opened a PR to fix this.\r\nThanks for reporting !"
] | "2020-11-22T08:18:34Z" | "2020-11-24T10:12:33Z" | "2020-11-24T10:12:33Z" | CONTRIBUTOR | null | null | null | Hi
I am trying to load the boolq dataset:
```
import datasets
datasets.load_dataset("boolq")
```
I am getting the following errors; thanks for your help:
```
>>> import datasets
2020-11-22 09:16:30.070332: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2020-11-22 09:16:30.070389: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
>>> datasets.load_dataset("boolq")
cahce dir /idiap/temp/rkarimi/cache_home/datasets
cahce dir /idiap/temp/rkarimi/cache_home/datasets
Using custom data configuration default
Downloading and preparing dataset boolq/default (download: 8.36 MiB, generated: 7.47 MiB, post-processed: Unknown size, total: 15.83 MiB) to /idiap/temp/rkarimi/cache_home/datasets/boolq/default/0.1.0/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11...
cahce dir /idiap/temp/rkarimi/cache_home/datasets
cahce dir /idiap/temp/rkarimi/cache_home/datasets/downloads
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators
downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 149, in download_custom
custom_download(url, path)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/tensorflow/python/lib/io/file_io.py", line 516, in copy_v2
compat.path_to_bytes(src), compat.path_to_bytes(dst), overwrite)
tensorflow.python.framework.errors_impl.AlreadyExistsError: file already exists
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/875/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/875/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4896 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4896/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4896/comments | https://api.github.com/repos/huggingface/datasets/issues/4896/events | https://github.com/huggingface/datasets/pull/4896 | 1,351,180,409 | PR_kwDODunzps49z4fU | 4,896 | Fix missing tags in dataset cards | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-08-25T16:41:43Z" | "2022-09-22T14:37:16Z" | "2022-08-26T04:41:48Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4896.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4896",
"merged_at": "2022-08-26T04:41:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4896.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4896"
} | Fix missing tags in dataset cards:
- anli
- coarse_discourse
- commonsense_qa
- cos_e
- ilist
- lc_quad
- web_questions
- xsum
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
- #4891 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4896/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4896/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1248 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1248/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1248/comments | https://api.github.com/repos/huggingface/datasets/issues/1248/events | https://github.com/huggingface/datasets/pull/1248 | 758,454,438 | MDExOlB1bGxSZXF1ZXN0NTMzNjI0ODY5 | 1,248 | Update step-by-step guide about the dataset cards | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomwolf",
"id": 7353373,
"login": "thomwolf",
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomwolf"
} | [] | closed | false | null | [] | null | [] | "2020-12-07T12:12:12Z" | "2020-12-07T13:19:24Z" | "2020-12-07T13:19:23Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1248.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1248",
"merged_at": "2020-12-07T13:19:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1248.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1248"
} | Small update in the step-by-step guide about the dataset cards to indicate that a card can be created and completed while exploring the dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1248/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1248/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4334 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4334/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4334/comments | https://api.github.com/repos/huggingface/datasets/issues/4334/events | https://github.com/huggingface/datasets/pull/4334 | 1,234,103,477 | PR_kwDODunzps43uguB | 4,334 | Adding eval metadata for billsum | {
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sashavor",
"id": 14205986,
"login": "sashavor",
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"repos_url": "https://api.github.com/users/sashavor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sashavor"
} | [] | closed | false | null | [] | null | [] | "2022-05-12T14:49:08Z" | "2023-09-24T10:02:46Z" | "2022-05-12T14:49:24Z" | NONE | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4334.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4334",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4334.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4334"
} | Adding eval metadata for billsum | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4334/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4334/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2594 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2594/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2594/comments | https://api.github.com/repos/huggingface/datasets/issues/2594/events | https://github.com/huggingface/datasets/pull/2594 | 937,294,772 | MDExOlB1bGxSZXF1ZXN0NjgzODc0NjIz | 2,594 | Fix BibTeX entry | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | "2021-07-05T18:24:10Z" | "2021-07-06T04:59:38Z" | "2021-07-06T04:59:38Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2594.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2594",
"merged_at": "2021-07-06T04:59:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2594.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2594"
} | Fix BibTeX entry. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2594/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2594/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1761 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1761/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1761/comments | https://api.github.com/repos/huggingface/datasets/issues/1761/events | https://github.com/huggingface/datasets/pull/1761 | 791,150,858 | MDExOlB1bGxSZXF1ZXN0NTU5MjUyMzEw | 1,761 | Add SILICONE benchmark | {
"avatar_url": "https://avatars.githubusercontent.com/u/1551356?v=4",
"events_url": "https://api.github.com/users/eusip/events{/privacy}",
"followers_url": "https://api.github.com/users/eusip/followers",
"following_url": "https://api.github.com/users/eusip/following{/other_user}",
"gists_url": "https://api.github.com/users/eusip/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/eusip",
"id": 1551356,
"login": "eusip",
"node_id": "MDQ6VXNlcjE1NTEzNTY=",
"organizations_url": "https://api.github.com/users/eusip/orgs",
"received_events_url": "https://api.github.com/users/eusip/received_events",
"repos_url": "https://api.github.com/users/eusip/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/eusip/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eusip/subscriptions",
"type": "User",
"url": "https://api.github.com/users/eusip"
} | [] | closed | false | null | [] | null | [
"Thanks for the feedback. All your comments have been addressed!",
"Thank you for your constructive feedback! I now know how to best format future datasets that our team plans to publish in the near future :)",
"Awesome ! Looking forward to it :) ",
"Hi @lhoestq ! One last question. Our research team would like to distribute a link to this dataset amongst the spoken dialogue research community but the dataset does not show in the dropdown menu at huggingface.co. Is there anything else we must do in order to find the dataset there ?\r\n\r\nOnce the dataset does show in the dropdown menu, how can I affiliate it with the Telecom Paris organization that I already created at the website ?",
"The files are not located in the right place in the repo. Let me move them",
"I created a PR at https://github.com/huggingface/datasets/pull/1794",
"I just merged the change @eusip, now the dataset page is available at the url:\r\nhttps://huggingface.co/datasets/silicone",
"Thank you for moving the folder for me :)"
] | "2021-01-21T14:29:12Z" | "2021-02-04T14:32:48Z" | "2021-01-26T13:50:31Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1761.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1761",
"merged_at": "2021-01-26T13:50:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1761.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1761"
} | My collaborators and I within the Affective Computing team at Telecom Paris would like to re-submit our spoken dialogue dataset for publication.
This is a new pull request relative to the [previously closed request](https://github.com/huggingface/datasets/pull/1712) which was reviewed by @lhoestq.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1761/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1761/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4003 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4003/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4003/comments | https://api.github.com/repos/huggingface/datasets/issues/4003/events | https://github.com/huggingface/datasets/issues/4003 | 1,179,286,877 | I_kwDODunzps5GSn1d | 4,003 | ASSIN2 dataset checksum bug | {
"avatar_url": "https://avatars.githubusercontent.com/u/14352388?v=4",
"events_url": "https://api.github.com/users/ruanchaves/events{/privacy}",
"followers_url": "https://api.github.com/users/ruanchaves/followers",
"following_url": "https://api.github.com/users/ruanchaves/following{/other_user}",
"gists_url": "https://api.github.com/users/ruanchaves/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ruanchaves",
"id": 14352388,
"login": "ruanchaves",
"node_id": "MDQ6VXNlcjE0MzUyMzg4",
"organizations_url": "https://api.github.com/users/ruanchaves/orgs",
"received_events_url": "https://api.github.com/users/ruanchaves/received_events",
"repos_url": "https://api.github.com/users/ruanchaves/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ruanchaves/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ruanchaves/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ruanchaves"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Using latest code, I am still facing the issue.\r\n\r\n```python\r\n(base) vimos@vimosmu ➜ ~ ipython\r\nPython 3.6.7 | packaged by conda-forge | (default, Nov 6 2019, 16:19:42) \r\nType 'copyright', 'credits' or 'license' for more information\r\nIPython 7.11.1 -- An enhanced Interactive Python. Type '?' for help.\r\n\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: load_dataset(\"assin2\")\r\nDownloading builder script: 4.24kB [00:00, 244kB/s]\r\nDownloading metadata: 2.58kB [00:00, 2.19MB/s]\r\nUsing custom data configuration default\r\nDownloading and preparing dataset assin2/default (download: 2.02 MiB, generated: 1.21 MiB, post-processed: Unknown size, total: 3.23 MiB) to /home/vimos/.cache/huggingface/datasets/assin2/default/1.0.0/8467f7acbda82f62ab960ca869dc1e96350e0e103a1ef7eaa43bbee530b80061...\r\nDownloading data: 1.51MB [00:00, 102MB/s]\r\nDownloading data: 116kB [00:00, 63.6MB/s]\r\nDownloading data: 493kB [00:00, 95.8MB/s] \r\nDownloading data files: 100%|██████████████████████████████████████████| 3/3 [00:00<00:00, 8.27it/s]\r\n---------------------------------------------------------------------------\r\nExpectedMoreDownloadedFiles Traceback (most recent call last)\r\n<ipython-input-2-b367d1ffd68e> in <module>\r\n----> 1 load_dataset(\"assin2\")\r\n\r\n~/anaconda3/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n 1694 ignore_verifications=ignore_verifications,\r\n 1695 try_from_hf_gcs=try_from_hf_gcs,\r\n-> 1696 use_auth_token=use_auth_token,\r\n 1697 )\r\n 1698\r\n\r\n~/anaconda3/lib/python3.6/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 604 if not downloaded_from_gcs:\r\n 605 self._download_and_prepare(\r\n--> 606 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 607 )\r\n 608 # Sync info\r\n\r\n~/anaconda3/lib/python3.6/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos)\r\n 1102\r\n 1103 def _download_and_prepare(self, dl_manager, verify_infos):\r\n-> 1104 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)\r\n 1105\r\n 1106 def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable:\r\n\r\n~/anaconda3/lib/python3.6/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 675 if verify_infos:\r\n 676 verify_checksums(\r\n--> 677 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), \"dataset source files\"\r\n 678 )\r\n 679\r\n\r\n~/anaconda3/lib/python3.6/site-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)\r\n 31 return\r\n 32 if len(set(expected_checksums) - set(recorded_checksums)) > 0:\r\n---> 33 raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))\r\n 34 if len(set(recorded_checksums) - set(expected_checksums)) > 0:\r\n 35 raise UnexpectedDownloadedFile(str(set(recorded_checksums) - set(expected_checksums)))\r\n\r\nExpectedMoreDownloadedFiles: {'https://drive.google.com/u/0/uc?id=1kb7xq6Mb3eaqe9cOAo70BaG9ypwkIqEU&export=download', 
'https://drive.google.com/u/0/uc?id=1J3FpQaHxpM-FDfBUyooh-sZF-B-bM_lU&export=download', 'https://drive.google.com/u/0/uc?id=1Q9j1a83CuKzsHCGaNulSkNxBm7Dkn7Ln&export=download'}\r\n```",
"That's true. Steps to reproduce the bug on Google Colab:\r\n\r\n```\r\ngit clone https://github.com/huggingface/datasets.git\r\ncd datasets\r\npip install -e .\r\npython -c \"from datasets import load_dataset; print(load_dataset('assin2')['train'][0])\"\r\n```\r\n\r\nHowever the dataset will load without any problems if you just install version 2.0.0:\r\n\r\n ```\r\npip install datasets\r\npython -c \"from datasets import load_dataset; print(load_dataset('assin2')['train'][0])\"\r\n```\r\n\r\nAny thoughts @lhoestq ?",
"Right indeed ! Let me open a PR to fix this.\r\nThe dataset_infos.json file that stores some metadata about the dataset to download (and is used to verify it was correctly downloaded) hasn't been updated correctly",
"Not sure what the status of this is, but personally I am still getting this error, with glue.",
"Can you open a new issue if you got an error with glue please ?",
"Have posted at #4241"
] | "2022-03-24T10:08:50Z" | "2022-04-27T14:14:45Z" | "2022-03-28T13:56:39Z" | CONTRIBUTOR | null | null | null | ## Describe the bug
Checksum error after trying to load the [ASSIN 2 dataset](https://huggingface.co/datasets/assin2).
`NonMatchingChecksumError` triggered by calling `load_dataset("assin2")`.
Similar to #3952 , #3942 , #3941 , etc.
```
---------------------------------------------------------------------------
NonMatchingChecksumError Traceback (most recent call last)
[<ipython-input-13-c664a92ad5e7>](https://localhost:8080/#) in <module>()
----> 1 load_dataset('assin2')
4 frames
[/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py](https://localhost:8080/#) in verify_checksums(expected_checksums, recorded_checksums, verification_name)
38 if len(bad_urls) > 0:
39 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))
41 logger.info("All the checksums matched successfully" + for_verification_name)
42
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/u/0/uc?id=1Q9j1a83CuKzsHCGaNulSkNxBm7Dkn7Ln&export=download']
```
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("assin2")
```
## Expected results
Load the dataset.
## Actual results
The dataset won't load.
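While the checksum metadata remains outdated, a possible workaround (just a sketch; these options depend on the installed `datasets` version) is to skip the verification or force a fresh download:
```python
from datasets import load_dataset

# Skip checksum/size verification of the downloaded files.
dataset = load_dataset("assin2", ignore_verifications=True)

# Or force a fresh download in case the cached files are stale.
dataset = load_dataset("assin2", download_mode="force_redownload")
```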
## Environment info
- `datasets` version: 2.0.1.dev0
- Platform: Google Colab
- Python version: 3.7.12
- PyArrow version: 6.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4003/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4003/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1287 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1287/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1287/comments | https://api.github.com/repos/huggingface/datasets/issues/1287/events | https://github.com/huggingface/datasets/issues/1287 | 759,300,992 | MDU6SXNzdWU3NTkzMDA5OTI= | 1,287 | 'iwslt2017-ro-nl', cannot be downloaded | {
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rabeehk",
"id": 6278280,
"login": "rabeehk",
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rabeehk"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [
"the same issue with datasets.load_dataset(\"iwslt2017\", 'iwslt2017-en-nl', split=split), ..... ",
"even with setting master like the following command, still remains \r\n\r\ndatasets.load_dataset(\"iwslt2017\", 'iwslt2017-en-nl', split=\"train\", script_version=\"master\")\r\n",
"Looks like the data has been moved from its original location to google drive\r\n\r\nNew url: https://drive.google.com/u/0/uc?id=12ycYSzLIG253AFN35Y6qoyf9wtkOjakp&export=download",
"Fixed by #4481 "
] | "2020-12-08T09:56:55Z" | "2022-06-13T10:41:33Z" | "2022-06-13T10:41:33Z" | CONTRIBUTOR | null | null | null | Hi
I am trying:
`>>> datasets.load_dataset("iwslt2017", 'iwslt2017-ro-nl', split="train")`
I am getting this error; thank you for your help:
```
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
Downloading and preparing dataset iwsl_t217/iwslt2017-ro-nl (download: 314.07 MiB, generated: 39.92 MiB, post-processed: Unknown size, total: 354.00 MiB) to /idiap/temp/rkarimi/cache_home_1/datasets/iwsl_t217/iwslt2017-ro-nl/1.0.0/cca6935a0851a8ceac1202a62c958738bdfa23c57a51bc52ac1c5ebd2aa172cd...
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets/downloads
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/iwslt2017/cca6935a0851a8ceac1202a62c958738bdfa23c57a51bc52ac1c5ebd2aa172cd/iwslt2017.py", line 118, in _split_generators
dl_dir = dl_manager.download_and_extract(MULTI_URL)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 179, in download
num_proc=download_config.num_proc,
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 216, in map_nested
return function(data_struct)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 477, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://wit3.fbk.eu/archive/2017-01-trnmted//texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.tgz
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1287/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1287/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6152 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6152/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6152/comments | https://api.github.com/repos/huggingface/datasets/issues/6152/events | https://github.com/huggingface/datasets/issues/6152 | 1,852,494,646 | I_kwDODunzps5uatM2 | 6,152 | FolderBase Dataset automatically resolves under current directory when data_dir is not specified | {
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"events_url": "https://api.github.com/users/npuichigo/events{/privacy}",
"followers_url": "https://api.github.com/users/npuichigo/followers",
"following_url": "https://api.github.com/users/npuichigo/following{/other_user}",
"gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/npuichigo",
"id": 11533479,
"login": "npuichigo",
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"organizations_url": "https://api.github.com/users/npuichigo/orgs",
"received_events_url": "https://api.github.com/users/npuichigo/received_events",
"repos_url": "https://api.github.com/users/npuichigo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/npuichigo"
} | [
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/126772439?v=4",
"events_url": "https://api.github.com/users/debrupf2946/events{/privacy}",
"followers_url": "https://api.github.com/users/debrupf2946/followers",
"following_url": "https://api.github.com/users/debrupf2946/following{/other_user}",
"gists_url": "https://api.github.com/users/debrupf2946/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/debrupf2946",
"id": 126772439,
"login": "debrupf2946",
"node_id": "U_kgDOB45k1w",
"organizations_url": "https://api.github.com/users/debrupf2946/orgs",
"received_events_url": "https://api.github.com/users/debrupf2946/received_events",
"repos_url": "https://api.github.com/users/debrupf2946/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/debrupf2946/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/debrupf2946/subscriptions",
"type": "User",
"url": "https://api.github.com/users/debrupf2946"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/126772439?v=4",
"events_url": "https://api.github.com/users/debrupf2946/events{/privacy}",
"followers_url": "https://api.github.com/users/debrupf2946/followers",
"following_url": "https://api.github.com/users/debrupf2946/following{/other_user}",
"gists_url": "https://api.github.com/users/debrupf2946/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/debrupf2946",
"id": 126772439,
"login": "debrupf2946",
"node_id": "U_kgDOB45k1w",
"organizations_url": "https://api.github.com/users/debrupf2946/orgs",
"received_events_url": "https://api.github.com/users/debrupf2946/received_events",
"repos_url": "https://api.github.com/users/debrupf2946/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/debrupf2946/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/debrupf2946/subscriptions",
"type": "User",
"url": "https://api.github.com/users/debrupf2946"
}
] | null | [
"@lhoestq ",
"Makes sense, I guess this can be fixed in the load_dataset_builder method.\r\nIt concerns every packaged builder I think (see values in `_PACKAGED_DATASETS_MODULES`)",
"I think the behavior is related to these lines, which short circuited the error handling.\r\nhttps://github.com/huggingface/datasets/blob/664a1cb72ea1e6ef7c47e671e2686ca4a35e8d63/src/datasets/load.py#L946-L952\r\n\r\nSo should data_dir be checked here or still delegating to actual `DatasetModule`? In that case, how to properly set `data_files` here.",
"This is location in PackagedDatasetModuleFactory.get_module seems the be the right place to check if at least data_dir or data_files are passed",
"@mariosasko can you please assign this issue to me,I want to work on this",
"#self-assign",
"@mariosasko is this issue still open? i would love to kickstart my journey to open source with this issue!\r\nRegards\r\nzutarich",
"@zutarich It is unless @debrupf2946 is working on it."
] | "2023-08-16T04:38:09Z" | "2023-10-10T16:30:19Z" | null | CONTRIBUTOR | null | null | null | ### Describe the bug
FolderBase Dataset automatically resolves under current directory when data_dir is not specified.
For example:
```
load_dataset("audiofolder")
```
takes a long time to resolve and collect data_files from the current directory. I think it should instead hit this line for error handling: https://github.com/huggingface/datasets/blob/cb8c5de5145c7e7eee65391cb7f4d92f0d565d62/src/datasets/packaged_modules/folder_based_builder/folder_based_builder.py#L58-L59
### Steps to reproduce the bug
```
load_dataset("audiofolder")
```
### Expected behavior
Error report
### Environment info
- `datasets` version: 2.14.4
- Platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.17
- Python version: 3.8.15
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6152/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6152/timeline | null | null | false |
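The fix discussed in the comments above boils down to failing fast when a packaged builder such as `audiofolder` is requested with neither `data_dir` nor `data_files`. A minimal standalone sketch of that guard, under the assumption that it would sit near `PackagedDatasetModuleFactory.get_module` (the function name and error message here are illustrative, not the merged fix):

```python
def check_packaged_data_config(name: str, data_dir=None, data_files=None) -> None:
    # Refuse to silently scan the whole current working directory when the
    # caller gave no hint about where the data lives.
    if data_dir is None and data_files is None:
        raise ValueError(
            f"Please specify `data_dir` or `data_files` when loading the packaged "
            f"'{name}' builder; otherwise the entire current directory is crawled."
        )

check_packaged_data_config("audiofolder")  # raises ValueError
```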
https://api.github.com/repos/huggingface/datasets/issues/3463 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3463/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3463/comments | https://api.github.com/repos/huggingface/datasets/issues/3463/events | https://github.com/huggingface/datasets/pull/3463 | 1,085,078,795 | PR_kwDODunzps4wGB4P | 3,463 | Update swahili_news dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | "2021-12-20T18:20:20Z" | "2021-12-21T06:24:03Z" | "2021-12-21T06:24:02Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3463.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3463",
"merged_at": "2021-12-21T06:24:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3463.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3463"
} | Update the dataset with the latest version of the data files.
Fix #3462.
Close bigscience-workshop/data_tooling#107 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3463/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3463/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3845 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3845/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3845/comments | https://api.github.com/repos/huggingface/datasets/issues/3845/events | https://github.com/huggingface/datasets/pull/3845 | 1,161,739,483 | PR_kwDODunzps40DvqX | 3,845 | add RMSE and MAE metrics. | {
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dnaveenr",
"id": 17746528,
"login": "dnaveenr",
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dnaveenr"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3845). All of your documentation changes will be reflected on that endpoint.",
"@mariosasko I've reopened it here. Please suggest any changes if required. Thank you.",
"Thanks for suggestions. :) I have added update the KWARGS_DESCRIPTION for the missing params and also changed RMSE to MSE.\r\nWhile testing, I noticed that when the input is a list of lists, we get an error :\r\n`TypeError: float() argument must be a string or a number, not 'list'`\r\nCould you suggest the datasets.Value() attribute to support both list of floats and list of lists containing floats ?\r\n",
"Just add a new config to cover that case. You can do this by replacing the current `features` dict with:\r\n```python\r\nfeatures=datasets.Features(\r\n {\r\n \"predictions\": datasets.Sequence(datasets.Value(\"float\")),\r\n \"references\": datasets.Sequence(datasets.Value(\"float\")),\r\n }\r\n if self.config_name == \"multioutput\"\r\n else {\r\n \"predictions\": datasets.Value(\"float\"),\r\n \"references\": datasets.Value(\"float\"),\r\n }\r\n),\r\n```\r\nFeel free to suggest a better name for the config than `multioutput`",
"Also, could you please move the changes to a new branch and open a PR from there (for the 3rd time 😄) because the diff shows changes from unrelated PRs (maybe due to rebasing?).",
"Thanks for the input, I have added new config to support multi-dimensional lists and updated the examples as well.\r\n\r\nSure. Will do that and open a new PR for these changes."
] | "2022-03-07T17:53:24Z" | "2022-03-09T16:50:03Z" | "2022-03-09T16:50:03Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3845.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3845",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3845.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3845"
} | This PR adds RMSE - Root Mean Squared Error and MAE - Mean Absolute Error to the metrics API.
Both implementations are based on scikit-learn.
Feature request here : Add support for continuous metrics (RMSE, MAE) [#3608](https://github.com/huggingface/datasets/issues/3608)
Please suggest any changes if required. Thank you. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3845/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3845/timeline | null | null | true |
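Since the PR above bases both metrics on scikit-learn, the underlying computation reduces to the following calls (a sketch of the plain sklearn usage, not of the `datasets` metric wrapper itself):

```python
from sklearn.metrics import mean_absolute_error, mean_squared_error

references = [3.0, -0.5, 2.0, 7.0]
predictions = [2.5, 0.0, 2.0, 8.0]

mae = mean_absolute_error(references, predictions)
mse = mean_squared_error(references, predictions)
rmse = mse ** 0.5  # root mean squared error, if needed
print(mae, mse, rmse)
```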
https://api.github.com/repos/huggingface/datasets/issues/1083 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1083/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1083/comments | https://api.github.com/repos/huggingface/datasets/issues/1083/events | https://github.com/huggingface/datasets/pull/1083 | 756,687,101 | MDExOlB1bGxSZXF1ZXN0NTMyMTk2Nzc0 | 1,083 | Add the multilingual Exams dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yjernite",
"id": 10469459,
"login": "yjernite",
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"repos_url": "https://api.github.com/users/yjernite/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yjernite"
} | [] | closed | false | null | [] | null | [
"Will slim down the dummy files in the morning"
] | "2020-12-04T00:06:04Z" | "2020-12-04T17:12:00Z" | "2020-12-04T17:12:00Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1083.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1083",
"merged_at": "2020-12-04T17:12:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1083.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1083"
} | https://github.com/mhardalov/exams-qa
`multilingual` configs have all languages mixed together
`crosslingual` mixes the languages for test but separates them for train and dev, so I've made one config per language for train/dev data and one config with the joint test set
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1083/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1083/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5993 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5993/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5993/comments | https://api.github.com/repos/huggingface/datasets/issues/5993/events | https://github.com/huggingface/datasets/issues/5993 | 1,776,643,555 | I_kwDODunzps5p5W3j | 5,993 | ValueError: Table schema does not match schema used to create file | {
"avatar_url": "https://avatars.githubusercontent.com/u/128361578?v=4",
"events_url": "https://api.github.com/users/exs-avianello/events{/privacy}",
"followers_url": "https://api.github.com/users/exs-avianello/followers",
"following_url": "https://api.github.com/users/exs-avianello/following{/other_user}",
"gists_url": "https://api.github.com/users/exs-avianello/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/exs-avianello",
"id": 128361578,
"login": "exs-avianello",
"node_id": "U_kgDOB6akag",
"organizations_url": "https://api.github.com/users/exs-avianello/orgs",
"received_events_url": "https://api.github.com/users/exs-avianello/received_events",
"repos_url": "https://api.github.com/users/exs-avianello/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/exs-avianello/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/exs-avianello/subscriptions",
"type": "User",
"url": "https://api.github.com/users/exs-avianello"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
}
] | null | [
"We'll do a new release of `datasets` soon to make the fix available :)\r\n\r\nIn the meantime you can use `datasets` from source (main)",
"Thank you very much @lhoestq ! 🚀 "
] | "2023-06-27T10:54:07Z" | "2023-06-27T15:36:42Z" | "2023-06-27T15:32:44Z" | NONE | null | null | null | ### Describe the bug
Saving a dataset as parquet fails with a `ValueError: Table schema does not match schema used to create file` if the dataset was obtained out of a `.select_columns()` call with columns selected out of order.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_dict(
{
"x1": [1, 2, 3],
"x2": [10, 11, 12],
}
)
ds = dataset.select_columns(["x2", "x1"])
ds.to_parquet("demo.parquet")
```
```shell
>>>
ValueError: Table schema does not match schema used to create file:
table:
x2: int64
x1: int64
-- schema metadata --
huggingface: '{"info": {"features": {"x2": {"dtype": "int64", "_type": "V' + 53 vs.
file:
x1: int64
x2: int64
-- schema metadata --
huggingface: '{"info": {"features": {"x1": {"dtype": "int64", "_type": "V' + 53
```
---
I think this is because after the `.select_columns()` call with out of order columns, the output dataset features' schema ends up being out of sync with the schema of the arrow table backing it.
```python
ds.features.arrow_schema
>>>
x1: int64
x2: int64
-- schema metadata --
huggingface: '{"info": {"features": {"x1": {"dtype": "int64", "_type": "V' + 53
ds.data.schema
>>>
x2: int64
x1: int64
-- schema metadata --
huggingface: '{"info": {"features": {"x2": {"dtype": "int64", "_type": "V' + 53
```
So when we call `.to_parquet()`, the behind-the-scenes call to `datasets.io.parquet.ParquetDatasetWriter(...).write()` initialises the backend `pyarrow.parquet.ParquetWriter` with `schema = self.dataset.features.arrow_schema`, and `pyarrow` then errors on write when [it checks](https://github.com/apache/arrow/blob/11b140a734a516e436adaddaeb35d23f30dcce44/python/pyarrow/parquet/core.py#L1086-L1090) that the `ParquetWriter` schema matches the schema of the table being written 🙌
https://github.com/huggingface/datasets/blob/6ed837325cb539a5deb99129e5ad181d0269e050/src/datasets/io/parquet.py#L139-L141
### Expected behavior
The dataset gets successfully saved as parquet.
*In the same way as it does if saving it as csv:
```python
import datasets
dataset = datasets.Dataset.from_dict(
{
"x1": [1, 2, 3],
"x2": [10, 11, 12],
}
)
ds = dataset.select_columns(["x2", "x1"])
ds.to_csv("demo.csv")
```
### Environment info
`python==3.11`
`datasets==2.13.1`
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5993/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5993/timeline | null | completed | false |
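Until the fix referenced in the comments is released, one possible workaround is to rebuild the dataset so that its features and its Arrow table agree on column order before writing. A sketch (note that `to_dict()` materialises the columns in memory, so this is only reasonable for small datasets):

```python
import datasets

dataset = datasets.Dataset.from_dict({"x1": [1, 2, 3], "x2": [10, 11, 12]})
ds = dataset.select_columns(["x2", "x1"])

# Rebuilding from plain Python columns regenerates a consistent schema.
ds_fixed = datasets.Dataset.from_dict(ds.to_dict())
ds_fixed.to_parquet("demo.parquet")
```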
https://api.github.com/repos/huggingface/datasets/issues/4528 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4528/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4528/comments | https://api.github.com/repos/huggingface/datasets/issues/4528/events | https://github.com/huggingface/datasets/issues/4528 | 1,276,679,155 | I_kwDODunzps5MGJPz | 4,528 | Memory leak when iterating a Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29777165?v=4",
"events_url": "https://api.github.com/users/NouamaneTazi/events{/privacy}",
"followers_url": "https://api.github.com/users/NouamaneTazi/followers",
"following_url": "https://api.github.com/users/NouamaneTazi/following{/other_user}",
"gists_url": "https://api.github.com/users/NouamaneTazi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NouamaneTazi",
"id": 29777165,
"login": "NouamaneTazi",
"node_id": "MDQ6VXNlcjI5Nzc3MTY1",
"organizations_url": "https://api.github.com/users/NouamaneTazi/orgs",
"received_events_url": "https://api.github.com/users/NouamaneTazi/received_events",
"repos_url": "https://api.github.com/users/NouamaneTazi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NouamaneTazi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NouamaneTazi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NouamaneTazi"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Is someone assigned to this issue?",
"The same issue is being debugged here: https://github.com/huggingface/datasets/issues/4883\r\n",
"Here is a modified repro example that makes it easier to see the leak:\r\n\r\n```\r\n$ cat ds2.py\r\nimport gc, sys\r\nimport time\r\nfrom datasets import load_dataset\r\nimport os, psutil\r\n\r\nprocess = psutil.Process(os.getpid())\r\n\r\nprint(process.memory_info().rss/2**20)\r\n\r\ncorpus = load_dataset(\"BeIR/msmarco\", 'corpus', keep_in_memory=False, streaming=False)['corpus']\r\ncorpus = corpus.select(range(200000))\r\n\r\nprint(process.memory_info().rss/2**20)\r\n\r\nbatch = None\r\n\r\nmem_before_start = psutil.Process(os.getpid()).memory_info().rss / 2**20\r\n\r\nstep = 20000\r\nfor i in range(0, 10*step, step):\r\n mem_before = psutil.Process(os.getpid()).memory_info().rss / 2**20\r\n batch = corpus[i:i+step]\r\n import objgraph\r\n #objgraph.show_refs([batch])\r\n #objgraph.show_refs([corpus])\r\n #sys.exit()\r\n gc.collect()\r\n\r\n mem_after = psutil.Process(os.getpid()).memory_info().rss / 2**20\r\n print(f\"{i:6d} {mem_after - mem_before:12.4f} {mem_after - mem_before_start:12.4f}\")\r\n\r\n```\r\n\r\nLet's run:\r\n\r\n```\r\n$ python ds2.py\r\n 0 36.5391 36.5391\r\n 20000 10.4609 47.0000\r\n 40000 5.9766 52.9766\r\n 60000 7.8906 60.8672\r\n 80000 6.0586 66.9258\r\n100000 8.4453 75.3711\r\n120000 6.7422 82.1133\r\n140000 8.5664 90.6797\r\n160000 5.7344 96.4141\r\n180000 8.3398 104.7539\r\n```\r\n\r\nYou can see the last column of total RSS memory keeps on growing in MBs. The mid column is by how much it was grown during a single iteration of the repro script (20000 items)",
"@NouamaneTazi, please check my analysis here https://github.com/huggingface/datasets/issues/4883#issuecomment-1242599722 so if you agree with my research this Issue can be closed as well.\r\n\r\nI also made a suggestion at how to proceed to hunt for a real leak here https://github.com/huggingface/datasets/issues/4883#issuecomment-1242600626\r\n\r\nyou may find this one to be useful as well https://github.com/huggingface/datasets/issues/4883#issuecomment-1242597966",
"Amazing job! Thanks for taking time to debug this 🤗\r\n\r\nFor my side, I tried to do some more research as well, but to no avail. https://github.com/huggingface/datasets/issues/4883#issuecomment-1243415957"
] | "2022-06-20T10:03:14Z" | "2022-09-12T08:51:39Z" | "2022-09-12T08:51:39Z" | MEMBER | null | null | null | e## Describe the bug
It seems that memory never gets freed after iterating a `Dataset` (using `.map()` or a simple `for` loop)
## Steps to reproduce the bug
```python
import gc
import logging
import time
import pyarrow
from datasets import load_dataset
from tqdm import trange
import os, psutil
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
process = psutil.Process(os.getpid())
print(process.memory_info().rss) # output: 633507840 bytes
corpus = load_dataset("BeIR/msmarco", 'corpus', keep_in_memory=False, streaming=False)['corpus'] # or "BeIR/trec-covid" for a smaller dataset
print(process.memory_info().rss) # output: 698601472 bytes
logger.info("Applying method to all examples in all splits")
for i in trange(0, len(corpus), 1000):
batch = corpus[i:i+1000]
data = pyarrow.total_allocated_bytes()
if data > 0:
logger.info(f"{i}/{len(corpus)}: {data}")
print(process.memory_info().rss) # output: 3788247040 bytes
del batch
gc.collect()
print(process.memory_info().rss) # output: 3788247040 bytes
logger.info("Done...")
time.sleep(100)
```
## Expected results
Limited memory usage, and memory to be freed after processing
## Actual results
Memory leak
![test](https://user-images.githubusercontent.com/29777165/174578276-f2c37e6c-b5d8-4985-b4d8-8413eb2b3241.png)
You can see how the memory allocation keeps increasing until it reaches a steady state when we hit the `time.sleep(100)`, which showcases that even the garbage collector couldn't free the allocated memory
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-5.4.0-90-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4528/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4528/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1904 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1904/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1904/comments | https://api.github.com/repos/huggingface/datasets/issues/1904/events | https://github.com/huggingface/datasets/pull/1904 | 811,260,904 | MDExOlB1bGxSZXF1ZXN0NTc1ODE4MjA0 | 1,904 | Fix to_pandas for boolean ArrayXD | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"Thanks!"
] | "2021-02-18T16:30:46Z" | "2021-02-18T17:10:03Z" | "2021-02-18T17:10:01Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1904.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1904",
"merged_at": "2021-02-18T17:10:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1904.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1904"
} | As noticed in #1887, the conversion of a dataset with boolean ArrayXD feature types fails because the underlying ListArray conversion to numpy requires `zero_copy_only=False`.
zero copy is available for all primitive types except booleans
see https://arrow.apache.org/docs/python/generated/pyarrow.Array.html#pyarrow.Array.to_numpy
and https://issues.apache.org/jira/browse/ARROW-2871?jql=text%20~%20%22boolean%20to_numpy%22
cc @SBrandeis | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1904/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1904/timeline | null | null | true |
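The limitation the PR works around can be seen directly in pyarrow: boolean arrays cannot be converted to NumPy without a copy, while primitive numeric arrays can. A small illustration (the exact exception message varies across pyarrow versions):

```python
import pyarrow as pa

print(pa.array([1, 2, 3]).to_numpy())  # zero-copy conversion works for ints

try:
    pa.array([True, False]).to_numpy()  # default zero_copy_only=True
except pa.ArrowInvalid as err:
    print("boolean arrays need a copy:", err)

print(pa.array([True, False]).to_numpy(zero_copy_only=False))  # explicit copy succeeds
```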
https://api.github.com/repos/huggingface/datasets/issues/114 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/114/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/114/comments | https://api.github.com/repos/huggingface/datasets/issues/114/events | https://github.com/huggingface/datasets/issues/114 | 618,611,310 | MDU6SXNzdWU2MTg2MTEzMTA= | 114 | Couldn't reach CNN/DM dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/astariul",
"id": 43774355,
"login": "astariul",
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"repos_url": "https://api.github.com/users/astariul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/astariul"
} | [] | closed | false | null | [] | null | [
"Installing from source (instead of Pypi package) solved the problem."
] | "2020-05-15T00:16:17Z" | "2020-05-15T00:19:52Z" | "2020-05-15T00:19:51Z" | NONE | null | null | null | I can't get CNN / DailyMail dataset.
```python
import nlp
assert "cnn_dailymail" in [dataset.id for dataset in nlp.list_datasets()]
cnn_dm = nlp.load_dataset('cnn_dailymail')
```
[Colab notebook](https://colab.research.google.com/drive/1zQ3bYAVzm1h0mw0yWPqKAg_4EUlSx5Ex?usp=sharing)
gives following error :
```
ConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/nlp/cnn_dailymail/cnn_dailymail.py
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/114/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/114/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1055 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1055/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1055/comments | https://api.github.com/repos/huggingface/datasets/issues/1055/events | https://github.com/huggingface/datasets/pull/1055 | 756,298,372 | MDExOlB1bGxSZXF1ZXN0NTMxODY1NjM4 | 1,055 | Add hebrew-sentiment | {
"avatar_url": "https://avatars.githubusercontent.com/u/23455264?v=4",
"events_url": "https://api.github.com/users/elronbandel/events{/privacy}",
"followers_url": "https://api.github.com/users/elronbandel/followers",
"following_url": "https://api.github.com/users/elronbandel/following{/other_user}",
"gists_url": "https://api.github.com/users/elronbandel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/elronbandel",
"id": 23455264,
"login": "elronbandel",
"node_id": "MDQ6VXNlcjIzNDU1MjY0",
"organizations_url": "https://api.github.com/users/elronbandel/orgs",
"received_events_url": "https://api.github.com/users/elronbandel/received_events",
"repos_url": "https://api.github.com/users/elronbandel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/elronbandel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elronbandel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/elronbandel"
} | [] | closed | false | null | [] | null | [
"@elronbandel it looks like something went wrong with the renaming, as the old files are still in the PR. Can you `git rm datasets/hebrew-sentiment` ?",
"merging since the CI is fixed on master",
"This is the old version of the data.\r\nHere is the fixed version.\r\nhttps://github.com/OnlpLab/Hebrew-Sentiment-Data\r\n\r\nI hope I would find time to open a PR. I think it supposed to be only to change the data path ",
"Cool ! Sure feel free to open a PR if you have some time :) and feel free to ping me for review or if you have questions"
] | "2020-12-03T15:24:31Z" | "2022-02-21T15:26:05Z" | "2020-12-04T11:24:16Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1055.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1055",
"merged_at": "2020-12-04T11:24:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1055.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1055"
} | hebrew-sentiment dataset is ready! (including tests, tags etc) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1055/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1055/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3512 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3512/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3512/comments | https://api.github.com/repos/huggingface/datasets/issues/3512/events | https://github.com/huggingface/datasets/issues/3512 | 1,092,359,973 | I_kwDODunzps5BHBcl | 3,512 | No Data format found | {
"avatar_url": "https://avatars.githubusercontent.com/u/57741378?v=4",
"events_url": "https://api.github.com/users/shazzad47/events{/privacy}",
"followers_url": "https://api.github.com/users/shazzad47/followers",
"following_url": "https://api.github.com/users/shazzad47/following{/other_user}",
"gists_url": "https://api.github.com/users/shazzad47/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shazzad47",
"id": 57741378,
"login": "shazzad47",
"node_id": "MDQ6VXNlcjU3NzQxMzc4",
"organizations_url": "https://api.github.com/users/shazzad47/orgs",
"received_events_url": "https://api.github.com/users/shazzad47/received_events",
"repos_url": "https://api.github.com/users/shazzad47/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shazzad47/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shazzad47/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shazzad47"
} | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | [] | null | [
"Hi, which dataset is giving you an error?"
] | "2022-01-03T09:41:11Z" | "2022-01-17T13:26:05Z" | "2022-01-17T13:26:05Z" | NONE | null | null | null | ## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3512/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3512/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6345 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6345/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6345/comments | https://api.github.com/repos/huggingface/datasets/issues/6345/events | https://github.com/huggingface/datasets/issues/6345 | 1,957,707,870 | I_kwDODunzps50sEBe | 6,345 | support squad structure datasets using a YAML parameter | {
"avatar_url": "https://avatars.githubusercontent.com/u/138524319?v=4",
"events_url": "https://api.github.com/users/MajdTannous1/events{/privacy}",
"followers_url": "https://api.github.com/users/MajdTannous1/followers",
"following_url": "https://api.github.com/users/MajdTannous1/following{/other_user}",
"gists_url": "https://api.github.com/users/MajdTannous1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MajdTannous1",
"id": 138524319,
"login": "MajdTannous1",
"node_id": "U_kgDOCEG2nw",
"organizations_url": "https://api.github.com/users/MajdTannous1/orgs",
"received_events_url": "https://api.github.com/users/MajdTannous1/received_events",
"repos_url": "https://api.github.com/users/MajdTannous1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MajdTannous1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MajdTannous1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MajdTannous1"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | "2023-10-23T17:55:37Z" | "2023-10-23T17:55:37Z" | null | NONE | null | null | null | ### Feature request
Since the squad structure is widely used, I think it could be beneficial to support it using a YAML parameter.
Could you implement automatic data loading of SQuAD-like data using the SQuAD JSON format, so it can be read from JSON files and viewed in the correct SQuAD structure?
The dataset structure should be like this:
https://huggingface.co/datasets/squad
Columns:id,title,context,question,answers
### Motivation
Dataset repo requires arbitrary Python code execution
### Your contribution
The dataset structure should be like this:
https://huggingface.co/datasets/squad
Columns: id, title, context, question, answers
Train and dev sets as SQuAD-structure JSON files | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6345/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6345/timeline | null | null | false |
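Without a dedicated YAML parameter, SQuAD-style JSON files can already be read with the generic `json` builder by pointing it at the top-level `data` field; flattening the nested `paragraphs`/`qas` structure into `id`/`title`/`context`/`question`/`answers` columns is an extra `map` step that is left out here because it is dataset-specific. A sketch, assuming `train.json`/`dev.json` follow the original SQuAD layout:

```python
from datasets import load_dataset

raw = load_dataset(
    "json",
    data_files={"train": "train.json", "validation": "dev.json"},
    field="data",  # the SQuAD files wrap everything in {"data": [...]}
)
print(raw["train"].features)  # one row per article, with nested paragraphs/qas
```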
https://api.github.com/repos/huggingface/datasets/issues/3383 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3383/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3383/comments | https://api.github.com/repos/huggingface/datasets/issues/3383/events | https://github.com/huggingface/datasets/pull/3383 | 1,071,551,884 | PR_kwDODunzps4vaFpm | 3,383 | add Georgian data in cc100. | {
"avatar_url": "https://avatars.githubusercontent.com/u/55232459?v=4",
"events_url": "https://api.github.com/users/AnzorGozalishvili/events{/privacy}",
"followers_url": "https://api.github.com/users/AnzorGozalishvili/followers",
"following_url": "https://api.github.com/users/AnzorGozalishvili/following{/other_user}",
"gists_url": "https://api.github.com/users/AnzorGozalishvili/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AnzorGozalishvili",
"id": 55232459,
"login": "AnzorGozalishvili",
"node_id": "MDQ6VXNlcjU1MjMyNDU5",
"organizations_url": "https://api.github.com/users/AnzorGozalishvili/orgs",
"received_events_url": "https://api.github.com/users/AnzorGozalishvili/received_events",
"repos_url": "https://api.github.com/users/AnzorGozalishvili/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AnzorGozalishvili/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AnzorGozalishvili/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AnzorGozalishvili"
} | [] | closed | false | null | [] | null | [] | "2021-12-05T20:38:09Z" | "2021-12-14T14:37:23Z" | "2021-12-14T14:37:22Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3383.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3383",
"merged_at": "2021-12-14T14:37:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3383.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3383"
} | Update the cc100 dataset to support loading Georgian (ka) data, which is originally available in the CC100 source.
All tests pass.
Dummy data generated.
Metadata generated. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3383/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3383/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/246 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/246/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/246/comments | https://api.github.com/repos/huggingface/datasets/issues/246/events | https://github.com/huggingface/datasets/issues/246 | 632,380,054 | MDU6SXNzdWU2MzIzODAwNTQ= | 246 | What is the best way to cache a dataset? | {
"avatar_url": "https://avatars.githubusercontent.com/u/112599?v=4",
"events_url": "https://api.github.com/users/Mistobaan/events{/privacy}",
"followers_url": "https://api.github.com/users/Mistobaan/followers",
"following_url": "https://api.github.com/users/Mistobaan/following{/other_user}",
"gists_url": "https://api.github.com/users/Mistobaan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Mistobaan",
"id": 112599,
"login": "Mistobaan",
"node_id": "MDQ6VXNlcjExMjU5OQ==",
"organizations_url": "https://api.github.com/users/Mistobaan/orgs",
"received_events_url": "https://api.github.com/users/Mistobaan/received_events",
"repos_url": "https://api.github.com/users/Mistobaan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Mistobaan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mistobaan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Mistobaan"
} | [] | closed | false | null | [] | null | [
"Everything is already cached by default in 🤗nlp (in particular dataset\nloading and all the “map()” operations) so I don’t think you need to do any\nspecific caching in streamlit.\n\nTell us if you feel like it’s not the case.\n\nOn Sat, 6 Jun 2020 at 13:02, Fabrizio Milo <[email protected]> wrote:\n\n> For example if I want to use streamlit with a nlp dataset:\n>\n> @st.cache\n> def load_data():\n> return nlp.load_dataset('squad')\n>\n> This code raises the error \"uncachable object\"\n>\n> Right now I just fixed with a constant for my specific case:\n>\n> @st.cache(hash_funcs={pyarrow.lib.Buffer: lambda b: 0})\n>\n> But I was curious to know what is the best way in general\n>\n> —\n> You are receiving this because you are subscribed to this thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/nlp/issues/246>, or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABYDIHKAKO7CWGX2QY55UXLRVIO3ZANCNFSM4NV333RQ>\n> .\n>\n",
"Closing this one. Feel free to re-open if you have other questions !"
] | "2020-06-06T11:02:07Z" | "2020-07-09T09:15:07Z" | "2020-07-09T09:15:07Z" | NONE | null | null | null | For example if I want to use streamlit with a nlp dataset:
```
@st.cache
def load_data():
return nlp.load_dataset('squad')
```
This code raises the error "uncachable object"
Right now I just fixed with a constant for my specific case:
```
@st.cache(hash_funcs={pyarrow.lib.Buffer: lambda b: 0})
```
But I was curious to know what is the best way in general
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/246/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/246/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2661 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2661/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2661/comments | https://api.github.com/repos/huggingface/datasets/issues/2661/events | https://github.com/huggingface/datasets/pull/2661 | 946,446,967 | MDExOlB1bGxSZXF1ZXN0NjkxNjE5MzAz | 2,661 | Add SD task for SUPERB | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"I make a summary about our discussion with @lewtun and @Narsil on the agreed schema for this dataset and the additional steps required to generate the 2D array labels:\r\n- The labels for this dataset are a 2D array:\r\n Given an example:\r\n ```python\r\n {\"record_id\": record_id, \"file\": file, \"start\": start, \"end\": end, \"speakers\": [...]}\r\n ```\r\n the labels are a 2D array of shape `(num_frames, num_speakers)` where `num_frames = end - start` and `num_speakers = 2`.\r\n- In order to avoid a too large dataset (too large disk space), `datasets` does not store the 2D array label. Instead, we store a compact form:\r\n ```\r\n \"speakers\": [\r\n {\"speaker_id\": speaker_0_id, \"start\": start_0_speaker_0, \"end\": end_0_speaker_0},\r\n {\"speaker_id\": speaker_0_id, \"start\": start_1_speaker_0, \"end\": end_1_speaker_0},\r\n {\"speaker_id\": speaker_1_id, \"start\": start_0_speaker_1, \"end\": end_0_speaker_1},\r\n ],\r\n ```\r\n - Once loaded the dataset, an additional step is required to generate the 2D array label from this compact form\r\n - This additional step should be a modified version of the s3prl method `_get_labeled_speech`:\r\n - Original s3prl `_get_labeled_speech` includes 2 functionalities: reading the audio file and transforming it into an array, and generating the label 2D array; I think we should separate these 2 functionalities\r\n - Original s3prl `_get_labeled_speech` performs 2 steps to generate the labels:\r\n - Transform start/end seconds (float) into frame numbers (int): I have already done this step to generate the dataset\r\n - Generate the 2D array label from the frame numbers\r\n\r\nI also ping @osanseviero and @lhoestq to include them in the loop.",
"Here I would like to discuss (and agree) one of the decisions I made, as I'm not completely satisfied with it: to transform the seconds (float) into frame numbers (int) to generate this dataset.\r\n\r\n- A priori, the most natural and general choice would be to preserve the seconds (float), because:\r\n - this is the way the raw data comes from\r\n - the transformation into frame numbers depends on the sample rate, frame_shift and subsampling\r\n\r\nHowever, I finally decided to transform seconds into frame numbers because:\r\n- for SUPERB, sampling rate, frame_shift and subsampling are fixed (`rate = 16_000`, `frame_shift = 160`, `subsampling = 1`)\r\n- it makes easier the post-processing, as labels are generated from sample numbers: labels are a 2D array of shape `(num_frames, num_speakers)`\r\n- the number of examples depends on the number of frames:\r\n - if an example has more than 2_000 frames, then it is split into 2 examples. This is the case for `record_id = \"7859-102521-0017_3983-5371-0014\"`, which has 2_452 frames and it is split into 2 examples:\r\n ```\r\n {\"record_id\": \"7859-102521-0017_3983-5371-0014\", \"start\"= 0, \"end\": 2_000,...},\r\n {\"record_id\": \"7859-102521-0017_3983-5371-0014\", \"start\"= 2_000, \"end\": 2_452,...},\r\n ```\r\n\r\nAs I told you, I'm not totally convinced of this decision, and I would really appreciate your opinion.\r\n\r\ncc: @lewtun @Narsil @osanseviero @lhoestq ",
"It makes total sense to prepare the data to be in a format that can actually be used for model training and evaluation. That's one of the roles of this lib :)\r\n\r\nSo for me it's ok to use frames as a unit instead of seconds. Just pinging @patrickvonplaten in case he has ever played with such audio tasks and has some advice. For the context: the task is to classify which speaker is speaking, let us know if you are aware of any convenient/standard format for this.\r\n\r\nAlso I'm not sure why you have to split an example if it's longer that 2,000 frames ?",
"> Also I'm not sure why you have to split an example if it's longer that 2,000 frames ?\r\n\r\nIt is a convention in SUPERB benchmark.",
"Note that if we agree to leave the dataset as it is now, 2 additional custom functions must be used:\r\n- one to generate the 2D array labels\r\n- one to load the audio file into an array, but taking into account start/end to cut the audio\r\n\r\nIs there a way we can give these functions ready to be used? Or should we leave this entirely to the end user? This is not trivial...",
"You could add an example of usage in the dataset card, as it is done for other audio datasets",
"@albertvillanova this simple function can be edited simply to add the start/stop cuts \r\n\r\nhttps://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/automatic_speech_recognition.py#L29 ",
"Does this function work on windows ?",
"Windows ? What is it ? (Not sure not able to test, it's directly calling ffmpeg binary, so depending on the setup it could but can't say for sure without testing)\r\n",
"It's one of the OS we're supposed to support :P (for the better and for the worse)",
"> Note that if we agree to leave the dataset as it is now, 2 additional custom functions must be used:\r\n> \r\n> * one to generate the 2D array labels\r\n> * one to load the audio file into an array, but taking into account start/end to cut the audio\r\n> \r\n> Is there a way we can give these functions ready to be used? Or should we leave this entirely to the end user? This is not trivial...\r\n\r\n+1 on providing the necessary functions on the dataset card. aside from that, the current implementation looks great from my perspective!"
] | "2021-07-16T16:43:21Z" | "2021-08-04T17:03:53Z" | "2021-08-04T17:03:53Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2661.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2661",
"merged_at": "2021-08-04T17:03:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2661.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2661"
} | Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sd-speaker-diarization).
TODO:
- [x] Generate the LibriMix corpus
- [x] Prepare the corpus for diarization
- [x] Upload these files to the superb-data repo
- [x] Transcribe the corresponding s3prl processing of these files into our superb loading script
- [x] README: tags + description sections
- ~~Add DER metric~~ (we leave the DER metric for a follow-up PR)
Related to #2619.
Close #2653.
cc: @lewtun | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2661/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2661/timeline | null | null | true |
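The compact `speakers` encoding agreed on in the comments can be expanded back into the `(num_frames, num_speakers)` label matrix with a few lines of NumPy. This is only a sketch of that post-processing step, not the s3prl `_get_labeled_speech` code, and it assumes the per-segment `start`/`end` are absolute frame indices:

```python
import numpy as np

def expand_speaker_labels(example, num_speakers=2):
    num_frames = example["end"] - example["start"]
    speaker_ids = sorted({seg["speaker_id"] for seg in example["speakers"]})
    labels = np.zeros((num_frames, num_speakers), dtype=np.int64)
    for seg in example["speakers"]:
        col = speaker_ids.index(seg["speaker_id"])
        start = max(seg["start"] - example["start"], 0)
        end = min(seg["end"] - example["start"], num_frames)
        labels[start:end, col] = 1  # mark the frames where this speaker is active
    return labels
```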
https://api.github.com/repos/huggingface/datasets/issues/4944 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4944/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4944/comments | https://api.github.com/repos/huggingface/datasets/issues/4944/events | https://github.com/huggingface/datasets/issues/4944 | 1,364,313,569 | I_kwDODunzps5RUcXh | 4,944 | larger dataset, larger GPU memory in the training phase? Is that correct? | {
"avatar_url": "https://avatars.githubusercontent.com/u/38886373?v=4",
"events_url": "https://api.github.com/users/debby1103/events{/privacy}",
"followers_url": "https://api.github.com/users/debby1103/followers",
"following_url": "https://api.github.com/users/debby1103/following{/other_user}",
"gists_url": "https://api.github.com/users/debby1103/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/debby1103",
"id": 38886373,
"login": "debby1103",
"node_id": "MDQ6VXNlcjM4ODg2Mzcz",
"organizations_url": "https://api.github.com/users/debby1103/orgs",
"received_events_url": "https://api.github.com/users/debby1103/received_events",
"repos_url": "https://api.github.com/users/debby1103/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/debby1103/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/debby1103/subscriptions",
"type": "User",
"url": "https://api.github.com/users/debby1103"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"does the trainer save it in GPU? sooo curious... how to fix it",
"It's my bad. didn't limit the input length"
] | "2022-09-07T08:46:30Z" | "2022-09-07T12:34:58Z" | "2022-09-07T12:34:58Z" | NONE | null | null | null | from datasets import set_caching_enabled
from datasets import load_from_disk, concatenate_datasets

set_caching_enabled(False)

for ds_name in ["squad", "newsqa", "nqopen", "narrativeqa"]:
    train_ds = load_from_disk("../../../dall/downstream/processedproqa/{}-train.hf".format(ds_name))
    break  # only the first dataset is used here

train_ds = concatenate_datasets([train_ds, train_ds, train_ds, train_ds])  # operation 1

trainer = QuestionAnsweringTrainer(  # huggingface trainer
    model=model,
    args=training_args,
    train_dataset=train_ds,
    eval_dataset=None,
    eval_examples=None,
    answer_column_name=answer_column,
    dataset_name="squad",
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics if training_args.predict_with_generate else None,
)
with operation 1, the GPU memory increases from 16G to 23G | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4944/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4944/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/723 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/723/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/723/comments | https://api.github.com/repos/huggingface/datasets/issues/723/events | https://github.com/huggingface/datasets/issues/723 | 718,926,723 | MDU6SXNzdWU3MTg5MjY3MjM= | 723 | Adding pseudo-labels to datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sshleifer",
"id": 6045025,
"login": "sshleifer",
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sshleifer"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sshleifer",
"id": 6045025,
"login": "sshleifer",
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sshleifer"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sshleifer",
"id": 6045025,
"login": "sshleifer",
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"organizations_url": "https://api.github.com/users/sshleifer/orgs",
"received_events_url": "https://api.github.com/users/sshleifer/received_events",
"repos_url": "https://api.github.com/users/sshleifer/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sshleifer"
}
] | null | [
"Nice ! :)\r\nIt's indeed the first time we have such contributions so we'll have to figure out the appropriate way to integrate them.\r\nCould you add details on what they could be used for ?\r\n",
"They can be used as training data for a smaller model.",
"Sounds just like a regular dataset to me then, no?",
"A new configuration for those datasets should do the job then.\r\nNote that until now datasets like xsum only had one configuration. It means that users didn't have to specify the configuration name when loading the dataset. If we add new configs, users that update the lib will have to update their code to specify the default/standard configuration name (not the one with pseudo labels).",
"Could also be a `user-namespace` dataset maybe?",
"Oh yes why not. I'm more in favor of this actually since pseudo labels are things that users (not dataset authors in general) can compute by themselves and share with the community",
"![image](https://user-images.githubusercontent.com/6045025/96045248-b528a380-0e3f-11eb-9124-bd55afa031bb.png)\r\n\r\nI assume I should (for example) rename the xsum dir, change the URL, and put the modified dir somewhere in S3?",
"You can use the `datasets-cli` to upload the folder with your version of xsum with the pseudo labels.\r\n\r\n```\r\ndatasets-cli upload_dataset path/to/xsum\r\n```"
] | "2020-10-11T21:05:45Z" | "2021-08-03T05:11:51Z" | "2021-08-03T05:11:51Z" | CONTRIBUTOR | null | null | null | I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generations on an existing dataset, what is the right way to structure this contribution?
I read https://huggingface.co/docs/datasets/add_dataset.html, but it doesn't really cover this type of contribution.
I could, for example, make a new directory, `xsum_bart_pseudolabels` for each set of pseudolabels or add some sort of parametrization to `xsum.py`: https://github.com/huggingface/datasets/blob/5f4c6e830f603830117877b8990a0e65a2386aa6/datasets/xsum/xsum.py
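To make the second option a bit more concrete, here is a rough sketch of what such a parametrization could look like (the config class, the extra field name, and the placeholder URL are made up for illustration; this is not the current `xsum.py` API):
```python
import datasets


class XsumConfig(datasets.BuilderConfig):
    """Hypothetical BuilderConfig: `pseudo_labels_url` is an illustrative extra field."""

    def __init__(self, pseudo_labels_url=None, **kwargs):
        super().__init__(**kwargs)
        # When set, the loading script would also download the pseudo-labels
        # and expose them as an extra column next to the gold summaries.
        self.pseudo_labels_url = pseudo_labels_url


BUILDER_CONFIGS = [
    XsumConfig(name="default"),
    XsumConfig(name="bart_pseudolabels", pseudo_labels_url="https://example.com/xsum_bart_pseudolabels.tgz"),
]
```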
What do you think @lhoestq ?
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/723/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/723/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1437 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1437/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1437/comments | https://api.github.com/repos/huggingface/datasets/issues/1437/events | https://github.com/huggingface/datasets/pull/1437 | 760,891,879 | MDExOlB1bGxSZXF1ZXN0NTM1NjQwODE0 | 1,437 | Add Indosum dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/11614678?v=4",
"events_url": "https://api.github.com/users/prasastoadi/events{/privacy}",
"followers_url": "https://api.github.com/users/prasastoadi/followers",
"following_url": "https://api.github.com/users/prasastoadi/following{/other_user}",
"gists_url": "https://api.github.com/users/prasastoadi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/prasastoadi",
"id": 11614678,
"login": "prasastoadi",
"node_id": "MDQ6VXNlcjExNjE0Njc4",
"organizations_url": "https://api.github.com/users/prasastoadi/orgs",
"received_events_url": "https://api.github.com/users/prasastoadi/received_events",
"repos_url": "https://api.github.com/users/prasastoadi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/prasastoadi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/prasastoadi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/prasastoadi"
} | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | [] | null | [
"Hi @prasastoadi have you had a chance to take a look at my suggestions ?\r\n\r\nFeel free to ping ;e if you have questions or when you're ready for a review",
"Thanks for your contribution, @prasastoadi. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help."
] | "2020-12-10T05:02:00Z" | "2022-10-03T09:38:54Z" | "2022-10-03T09:38:54Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1437.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1437",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1437.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1437"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1437/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1437/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4336 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4336/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4336/comments | https://api.github.com/repos/huggingface/datasets/issues/4336/events | https://github.com/huggingface/datasets/pull/4336 | 1,234,446,174 | PR_kwDODunzps43vpqG | 4,336 | Eval metadata batch 2 : Health Fact, Jigsaw Toxicity, LIAR, LJ Speech, MSRA NER, Multi News, NCBI Disease, Poem Sentiment | {
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sashavor",
"id": 14205986,
"login": "sashavor",
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"repos_url": "https://api.github.com/users/sashavor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sashavor"
} | [] | closed | false | null | [] | null | [
"Summary of CircleCI errors:\r\n- **Jjigsaw_toxicity_pred**: `Citation Information` but it is empty.\r\n- **LIAR** : `Data Instances`,`Data Fields`, `Data Splits`, `Citation Information` are empty.\r\n- **MSRA NER** : Dataset Summary`, `Data Instances`, `Data Fields`, `Data Splits`, `Citation Information` are empty.\r\n",
"The CI errors about missing content in the dataset cards can be ignored in this PR btw",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4336). All of your documentation changes will be reflected on that endpoint."
] | "2022-05-12T20:24:45Z" | "2022-05-16T16:25:00Z" | "2022-05-16T16:24:59Z" | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4336.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4336",
"merged_at": "2022-05-16T16:24:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4336.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4336"
} | Adding evaluation metadata for:
- Health Fact
- Jigsaw Toxicity
- LIAR
- LJ Speech
- MSRA NER
- Multi News
- NCBI Disease
- Poem Sentiment | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4336/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4336/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/722 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/722/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/722/comments | https://api.github.com/repos/huggingface/datasets/issues/722/events | https://github.com/huggingface/datasets/pull/722 | 718,689,117 | MDExOlB1bGxSZXF1ZXN0NTAxMDI3NjAw | 722 | datasets(RWTH-PHOENIX-Weather 2014 T): add initial loading script | {
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AmitMY",
"id": 5757359,
"login": "AmitMY",
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AmitMY"
} | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | [] | null | [
"This might be interesting to @kayoyin the author of https://github.com/kayoyin/transformer-slt – pinging you just in case :)",
"Thanks Amit, this is a great idea! I'm thinking of porting the SLT models from my paper here as well, having this dataset would be perfect for that :)",
"Thanks for your contribution, @AmitMY. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help."
] | "2020-10-10T19:44:08Z" | "2022-09-30T14:53:37Z" | "2022-09-30T14:53:37Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/722.diff",
"html_url": "https://github.com/huggingface/datasets/pull/722",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/722.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/722"
} | This is the first sign language dataset in this repo as far as I know.
Following an old issue I opened https://github.com/huggingface/datasets/issues/302.
I added the dataset's official README file, but I see it's not very standard, so it can be removed.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/722/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/722/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1580 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1580/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1580/comments | https://api.github.com/repos/huggingface/datasets/issues/1580/events | https://github.com/huggingface/datasets/pull/1580 | 768,111,377 | MDExOlB1bGxSZXF1ZXN0NTQwNjQxNDQ3 | 1,580 | made suggested changes in diplomacy_detection.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/15351802?v=4",
"events_url": "https://api.github.com/users/MisbahKhan789/events{/privacy}",
"followers_url": "https://api.github.com/users/MisbahKhan789/followers",
"following_url": "https://api.github.com/users/MisbahKhan789/following{/other_user}",
"gists_url": "https://api.github.com/users/MisbahKhan789/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MisbahKhan789",
"id": 15351802,
"login": "MisbahKhan789",
"node_id": "MDQ6VXNlcjE1MzUxODAy",
"organizations_url": "https://api.github.com/users/MisbahKhan789/orgs",
"received_events_url": "https://api.github.com/users/MisbahKhan789/received_events",
"repos_url": "https://api.github.com/users/MisbahKhan789/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MisbahKhan789/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MisbahKhan789/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MisbahKhan789"
} | [] | closed | false | null | [] | null | [] | "2020-12-15T19:52:00Z" | "2020-12-16T10:27:52Z" | "2020-12-16T10:27:52Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1580.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1580",
"merged_at": "2020-12-16T10:27:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1580.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1580"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1580/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1580/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/4525 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4525/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4525/comments | https://api.github.com/repos/huggingface/datasets/issues/4525/events | https://github.com/huggingface/datasets/issues/4525 | 1,276,491,386 | I_kwDODunzps5MFbZ6 | 4,525 | Out of memory error on workers while running Beam+Dataflow | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [
"Some naive ideas to cope with this:\r\n- enable more RAM on each worker\r\n- force the spanning of more workers\r\n- others?",
"@albertvillanova We were finally able to process the full NQ dataset on our machines using 600 gb with 5 workers. Maybe these numbers will work for you as well.",
"Thanks a lot for the hint, @seirasto.\r\n\r\nI have one question: what runner did you use? Direct, Apache Flink/Nemo/Samza/Spark, Google Dataflow...? Thank you.",
"I asked my colleague who ran the code and he said apache beam.",
"@albertvillanova Since we have already processed the NQ dataset on our machines can we upload it to datasets so the NQ PR can be merged?",
"Maybe @lhoestq can give a more accurate answer as I am not sure about the authentication requirements to upload those files to our cloud bucket.\r\n\r\nAnyway I propose to continue this discussion on the dedicated PR for Natural questions dataset:\r\n- #4368",
"> I asked my colleague who ran the code and he said apache beam.\r\n\r\nHe looked into it further and he just used DirectRunner. @albertvillanova ",
"OK, thank you @seirasto for your hint.\r\n\r\nThat explains why you did not encounter the out of memory error: this only appears when the processing is distributed (on workers memory) and DirectRunner does not distribute the processing (all is done in a single machine). "
] | "2022-06-20T07:28:12Z" | "2022-06-30T09:33:57Z" | null | MEMBER | null | null | null | ## Describe the bug
While running the preprocessing of the natural_question dataset (see PR #4368), there is an issue for the "default" config (train+dev files).
Previously we ran the preprocessing for the "dev" config (only dev files) with success.
Train data files are larger than dev ones and apparently workers run out of memory while processing them.
Any help/hint is welcome!
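One mitigation sketch (not yet tested on this job; the machine type, disk size and worker count below are assumptions, not a validated configuration) would be to request larger and/or more Dataflow workers through the Beam pipeline options, e.g.:
```python
from apache_beam.options.pipeline_options import PipelineOptions
from datasets import load_dataset

# Assumed worker sizing — adjust to the project's actual quota/budget.
beam_options = PipelineOptions(
    flags=[
        "--project=my-project",
        "--region=us-central1",
        "--temp_location=gs://my-bucket/temp",
        "--machine_type=n1-highmem-16",  # more RAM per worker
        "--disk_size_gb=250",
        "--num_workers=10",              # spread the shards over more workers
    ]
)

ds = load_dataset(
    "natural_questions",
    beam_runner="DataflowRunner",
    beam_options=beam_options,
)
```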
Error message:
```
Data channel closed, unable to receive additional data from SDK sdk-0-0
```
Info from the Diagnostics tab:
```
Out of memory: Killed process 1882 (python) total-vm:6041764kB, anon-rss:3290928kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:9520kB oom_score_adj:900
The worker VM had to shut down one or more processes due to lack of memory.
```
## Additional information
### Stack trace
```
Traceback (most recent call last):
File "/home/albert_huggingface_co/natural_questions/venv/bin/datasets-cli", line 8, in <module>
sys.exit(main())
File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/datasets/commands/datasets_cli.py", line 39, in main
service.run()
File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/datasets/commands/run_beam.py", line 127, in run
builder.download_and_prepare(
File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/datasets/builder.py", line 1389, in _download_and_prepare
pipeline_results.wait_until_finish()
File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/apache_beam/runners/dataflow/dataflow_runner.py", line 1667, in wait_until_finish
raise DataflowRuntimeException(
apache_beam.runners.dataflow.dataflow_runner.DataflowRuntimeException: Dataflow pipeline failed. State: FAILED, Error:
Data channel closed, unable to receive additional data from SDK sdk-0-0
```
### Logs
```
Error message from worker: Data channel closed, unable to receive additional data from SDK sdk-0-0
Workflow failed. Causes: S30:train/ReadAllFromText/ReadAllFiles/Reshard/ReshufflePerKey/GroupByKey/Read+train/ReadAllFromText/ReadAllFiles/Reshard/ReshufflePerKey/GroupByKey/GroupByWindow+train/ReadAllFromText/ReadAllFiles/Reshard/ReshufflePerKey/FlatMap(restore_timestamps)+train/ReadAllFromText/ReadAllFiles/Reshard/RemoveRandomKeys+train/ReadAllFromText/ReadAllFiles/ReadRange+train/Map(_parse_example)+train/Encode+train/Count N. Examples+train/Get values/Values+train/Save to parquet/Write/WriteImpl/WindowInto(WindowIntoFn)+train/Save to parquet/Write/WriteImpl/WriteBundles+train/Save to parquet/Write/WriteImpl/Pair+train/Save to parquet/Write/WriteImpl/GroupByKey/Write failed., The job failed because a work item has failed 4 times. Look in previous log entries for the cause of each one of the 4 failures. For more information, see https://cloud.google.com/dataflow/docs/guides/common-errors. The work item was attempted on these workers: beamapp-alberthuggingface-06170554-5p23-harness-t4v9 Root cause: Data channel closed, unable to receive additional data from SDK sdk-0-0, beamapp-alberthuggingface-06170554-5p23-harness-t4v9 Root cause: The worker lost contact with the service., beamapp-alberthuggingface-06170554-5p23-harness-bwsj Root cause: The worker lost contact with the service., beamapp-alberthuggingface-06170554-5p23-harness-5052 Root cause: The worker lost contact with the service.
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4525/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4525/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1866 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1866/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1866/comments | https://api.github.com/repos/huggingface/datasets/issues/1866/events | https://github.com/huggingface/datasets/pull/1866 | 807,017,816 | MDExOlB1bGxSZXF1ZXN0NTcyMzM3NDQ1 | 1,866 | Add dataset for Financial PhraseBank | {
"avatar_url": "https://avatars.githubusercontent.com/u/299380?v=4",
"events_url": "https://api.github.com/users/frankier/events{/privacy}",
"followers_url": "https://api.github.com/users/frankier/followers",
"following_url": "https://api.github.com/users/frankier/following{/other_user}",
"gists_url": "https://api.github.com/users/frankier/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/frankier",
"id": 299380,
"login": "frankier",
"node_id": "MDQ6VXNlcjI5OTM4MA==",
"organizations_url": "https://api.github.com/users/frankier/orgs",
"received_events_url": "https://api.github.com/users/frankier/received_events",
"repos_url": "https://api.github.com/users/frankier/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/frankier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frankier/subscriptions",
"type": "User",
"url": "https://api.github.com/users/frankier"
} | [] | closed | false | null | [] | null | [
"Thanks for the feedback. All accepted and metadata regenerated."
] | "2021-02-12T07:30:56Z" | "2021-02-17T14:22:36Z" | "2021-02-17T14:22:36Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1866.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1866",
"merged_at": "2021-02-17T14:22:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1866.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1866"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1866/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1866/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/3830 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3830/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3830/comments | https://api.github.com/repos/huggingface/datasets/issues/3830/events | https://github.com/huggingface/datasets/issues/3830 | 1,160,181,404 | I_kwDODunzps5FJvac | 3,830 | Got error when load cnn_dailymail dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/78331051?v=4",
"events_url": "https://api.github.com/users/wgong0510/events{/privacy}",
"followers_url": "https://api.github.com/users/wgong0510/followers",
"following_url": "https://api.github.com/users/wgong0510/following{/other_user}",
"gists_url": "https://api.github.com/users/wgong0510/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wgong0510",
"id": 78331051,
"login": "wgong0510",
"node_id": "MDQ6VXNlcjc4MzMxMDUx",
"organizations_url": "https://api.github.com/users/wgong0510/orgs",
"received_events_url": "https://api.github.com/users/wgong0510/received_events",
"repos_url": "https://api.github.com/users/wgong0510/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wgong0510/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wgong0510/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wgong0510"
} | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] | closed | false | null | [] | null | [
"Was able to reproduce the issue on Colab; full logs below. \r\n\r\n```\r\n---------------------------------------------------------------------------\r\nNotADirectoryError Traceback (most recent call last)\r\n[<ipython-input-2-39967739ba7f>](https://localhost:8080/#) in <module>()\r\n 1 import datasets\r\n 2 \r\n----> 3 train_data = datasets.load_dataset(\"cnn_dailymail\", \"3.0.0\", split=\"train\")\r\n\r\n5 frames\r\n[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)\r\n 1705 ignore_verifications=ignore_verifications,\r\n 1706 try_from_hf_gcs=try_from_hf_gcs,\r\n-> 1707 use_auth_token=use_auth_token,\r\n 1708 )\r\n 1709 \r\n\r\n[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 593 if not downloaded_from_gcs:\r\n 594 self._download_and_prepare(\r\n--> 595 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 596 )\r\n 597 # Sync info\r\n\r\n[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 659 split_dict = SplitDict(dataset_name=self.name)\r\n 660 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 661 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 662 \r\n 663 # Checksums verification\r\n\r\n[/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234/cnn_dailymail.py](https://localhost:8080/#) in _split_generators(self, dl_manager)\r\n 253 def _split_generators(self, dl_manager):\r\n 254 dl_paths = dl_manager.download_and_extract(_DL_URLS)\r\n--> 255 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN)\r\n 256 # Generate shared vocabulary\r\n 257 \r\n\r\n[/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234/cnn_dailymail.py](https://localhost:8080/#) in _subset_filenames(dl_paths, split)\r\n 154 else:\r\n 155 logger.fatal(\"Unsupported split: %s\", split)\r\n--> 156 cnn = _find_files(dl_paths, \"cnn\", urls)\r\n 157 dm = _find_files(dl_paths, \"dm\", urls)\r\n 158 return cnn + dm\r\n\r\n[/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234/cnn_dailymail.py](https://localhost:8080/#) in _find_files(dl_paths, publisher, url_dict)\r\n 133 else:\r\n 134 logger.fatal(\"Unsupported publisher: %s\", publisher)\r\n--> 135 files = sorted(os.listdir(top_dir))\r\n 136 \r\n 137 ret_files = []\r\n\r\nNotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'\r\n```",
"Hi @jon-tow, thanks for reporting. And hi @dynamicwebpaige, thanks for your investigation. \r\n\r\nThis issue was already reported \r\n- #3784\r\n\r\nand its root cause is a change in the Google Drive service. See:\r\n- #3786 \r\n\r\nWe have already fixed it. See:\r\n- #3787 \r\n\r\nWe are planning to make a patch release today (indeed, we were planning to do it last Friday).\r\n\r\nIn the meantime, you can get this fix by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, if you had previously tried to load the data and got the checksum error, you should force the redownload of the data (before the fix, you just downloaded and cached the virus scan warning page, instead of the data file):\r\n```shell\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```\r\n\r\nCC: @lhoestq "
] | "2022-03-05T01:43:12Z" | "2022-03-07T06:53:41Z" | "2022-03-07T06:53:41Z" | NONE | null | null | null | When using datasets.load_dataset method to load cnn_dailymail dataset, got error as below:
- Windows OS: FileNotFoundError: [WinError 3] The system cannot find the path specified (系统找不到指定的路径。): 'D:\\SourceCode\\DataScience\\HuggingFace\\Data\\downloads\\1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b\\cnn\\stories'
- Google Colab: NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
The code used to load the dataset:
Windows OS:
```
from datasets import load_dataset
dataset = load_dataset("cnn_dailymail", "3.0.0", cache_dir="D:\\SourceCode\\DataScience\\HuggingFace\\Data")
```
Google Colab:
```
import datasets
train_data = datasets.load_dataset("cnn_dailymail", "3.0.0", split="train")
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3830/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3830/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2774 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2774/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2774/comments | https://api.github.com/repos/huggingface/datasets/issues/2774/events | https://github.com/huggingface/datasets/pull/2774 | 963,932,199 | MDExOlB1bGxSZXF1ZXN0NzA2NDY2MDc0 | 2,774 | Prevent .map from using multiprocessing when loading from cache | {
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomasw21",
"id": 24695242,
"login": "thomasw21",
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomasw21"
} | [] | closed | false | null | [] | null | [
"I'm guessing tests are failling, because this was pushed before https://github.com/huggingface/datasets/pull/2779 was merged? cc @albertvillanova ",
"Hi @thomasw21, yes you are right: those failing tests were fixed with #2779.\r\n\r\nWould you mind to merge current upstream master branch and push again?\r\n```\r\ngit checkout sequential_map_when_cached\r\ngit fetch upstream master\r\ngit merge upstream/master\r\ngit push -u origin sequential_map_when_cached\r\n```",
"Thanks for working on this ! I'm sure we can figure something out ;)\r\n\r\nCurrently `map` starts a process to apply the map function on each shard. If the shard has already been processed, then the process that has been spawned loads the processed shard from the cache and returns it.\r\n\r\nI think we should be able to simply not start a process if a shard is already processed and cached.\r\nThis way:\r\n- you won't need to specify `sequential=True`\r\n- it won't create new processes if the dataset is already processed and cached\r\n- it will properly reload each processed shard that is cached\r\n\r\nTo know if we have to start a new process for a shard you can use the function `update_fingerprint` from fingerprint.py to know the expected fingerprint of the processed shard.\r\nThen, if the shard has already been processed, there will be a cache file named `cached-<new_fingerprint>.arrow` and you can load it with\r\n```\r\nDataset.from_file(path_to_cache_file, info=self.info, split=self.split)\r\n```\r\n\r\nLet me know if that makes sense !",
"Yes that makes total sense, I tried to initially do that, except the way fingerprint is handled doesn't allow to easily manipulate such a field. Typically the fingerprinting is handled in `@fingerprint_transform` which has a bunch of params that aren't quite easy to extract. Those params are used to manipulate args, kwargs in fancy ways in order to finally obtain a dictionary used for fingerprint. I could duplicate everything, but this look like a very risky thing to do. I'll take a look if I can make something work with `inspect` if I can make a very simple wrapper.\r\n\r\nA much more simpler solution I think is adding an optional `shard: Optional[int] = None` parameter. If None, we use the number of proc as the number of shards, otherwise we pass down the expected number of shards and use either sequential/multiprocessing (with arbitrary number of workers) to load the shards? This would allow the weird case where one wants a large number of shards with a limited amount of processes. Not the smartest thing to do, but it's not an absurd behaviour. Would this be acceptable?",
"@lhoestq friendly ping as I feel it's up for review.",
"The CI error is unrelated to the changes of this PR - it looks like an SSL issue with conda"
] | "2021-08-09T12:11:38Z" | "2021-09-09T10:20:28Z" | "2021-09-09T10:20:28Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2774.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2774",
"merged_at": "2021-09-09T10:20:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2774.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2774"
} | ## Context
On our setup, we use a different setup for training vs. preprocessing datasets. Usually we are able to obtain a high number of CPUs for preprocessing, which allows us to use `num_proc`; however, we can't use as many during the training phase. Currently, if we use `num_proc={whatever the preprocessing value was}`, we load from cache, but we get:
```
Traceback (most recent call last):
File "lib/python3.8/site-packages/multiprocess/pool.py", line 131, in worker
put((job, i, result))
File "lib/python3.8/site-packages/multiprocess/queues.py", line 371, in put
self._writer.send_bytes(obj)
File "lib/python3.8/site-packages/multiprocess/connection.py", line 203, in send_bytes
self._send_bytes(m[offset:offset + size])
File "lib/python3.8/site-packages/multiprocess/connection.py", line 414, in _send_bytes
self._send(header + buf)
File "lib/python3.8/site-packages/multiprocess/connection.py", line 371, in _send
n = write(self._handle, buf)
BrokenPipeError: [Errno 32] Broken pipe
```
Our current guess is that we're spawning too many processes compared to the number of CPUs available, and it's running out of memory (OOM). Also, we're loading this in a DDP setting, which means that for each GPU, I need to spawn a high number of processes to match the preprocessing fingerprint.
Instead what we suggest:
- Allow loading shard sequentially, sharing the same fingerprint as the multiprocessed one, in order to leverage multiprocessing when we actually generate the cache, and remove it when loading from cache.
## Current issues
~I'm having a hard time making fingerprints match. For some reason, the multiprocessing and the sequential versions generate two different hashes.~
**EDIT**: Turns out multiprocessing and sequential have different `transform` value for fingerprinting (check `fingerprint_transform`) when running `_map_single`:
- sequential : `datasets.arrow_dataset.Dataset._map_single`
- multiprocessing: `datasets.arrow_dataset._map_single`
This discrepancy is caused by multiprocessing pickling the transform function; it doesn't seem to keep the `Dataset` hierarchy. I'm still unclear on why `func.__qualname__` isn't handled correctly in multiprocessing, but replacing `__qualname__` by `__name__` fixes the issue.
## What was done
~We try to prevent the usage of multiprocessing when loading a dataset. Instead we load all cached shards sequentially.~
I couldn't find a nice way to obtain the cached_file_name and check that they all exist before deciding to use the multiprocessing flow or not. Instead, I expose an optional boolean `sequential` in the `map` method.
## TODO
- [x] Check that the multiprocessed version and the sequential version output the same output
- [x] Check that sequential can load multiprocessed
- [x] Check that multiprocessed can load sequential
## Test
```python
from datasets import load_dataset
from multiprocessing import Pool
import random
def process(batch, rng):
length = len(batch["text"])
return {**batch, "processed_text": [f"PROCESSED {rng.random()}" for _ in range(length)]}
dataset = load_dataset("stas/openwebtext-10k", split="train")
print(dataset.column_names)
print(type(dataset))
rng = random.Random(42)
dataset1 = dataset.map(process, batched=True, batch_size=50, num_proc=4, fn_kwargs={"rng": rng})
# This one should be loaded from cache
rng = random.Random(42)
dataset2 = dataset.map(process, batched=True, batch_size=50, num_proc=4, fn_kwargs={"rng": rng}, sequential=True)
# Just to check that the random generator was correct
print(dataset1[-1]["processed_text"])
print(dataset2[-1]["processed_text"])
```
## Other solutions
I chose to load everything sequentially, but we can probably find a way to load shards in parallel using another number of workers (essentially this would be an argument not used for fingerprinting, allowing `m` shards to be loaded using `n` processes, which would be very useful when the same dataset has to be loaded on two different setups and we still want to leverage the cache).
Also, we could use an env variable similar to `TOKENIZERS_PARALLELISM`, as this seems generally setup-related (though this changes slightly if we use multiprocessing); a rough sketch of such a switch is below.
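For reference, a rough sketch of what such a switch could look like (the variable name `HF_DATASETS_SEQUENTIAL_CACHE_LOAD` is made up for illustration, and `sequential=` is the optional argument proposed in this PR):
```python
import os

# Hypothetical opt-in: reload already-cached shards sequentially instead of spawning workers.
sequential = os.getenv("HF_DATASETS_SEQUENTIAL_CACHE_LOAD", "0") == "1"

dataset2 = dataset.map(
    process,
    batched=True,
    batch_size=50,
    num_proc=4,
    fn_kwargs={"rng": rng},
    sequential=sequential,
)
```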
cc @lhoestq (since I had asked you previously on `num_proc` being used for fingerprinting). Don't know if this is acceptable. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2774/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2774/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4002 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4002/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4002/comments | https://api.github.com/repos/huggingface/datasets/issues/4002/events | https://github.com/huggingface/datasets/pull/4002 | 1,179,263,787 | PR_kwDODunzps408Cfp | 4,002 | Support streaming conll2012_ontonotesv5 dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-03-24T09:49:56Z" | "2022-03-24T10:53:41Z" | "2022-03-24T10:48:47Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4002.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4002",
"merged_at": "2022-03-24T10:48:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4002.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4002"
} | Use another URL with a single ZIP file (instead of the previous one with a ZIP file inside another ZIP file). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4002/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4002/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1239 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1239/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1239/comments | https://api.github.com/repos/huggingface/datasets/issues/1239/events | https://github.com/huggingface/datasets/pull/1239 | 758,339,593 | MDExOlB1bGxSZXF1ZXN0NTMzNTI4NTU5 | 1,239 | add yelp_review_full dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/29229602?v=4",
"events_url": "https://api.github.com/users/hfawaz/events{/privacy}",
"followers_url": "https://api.github.com/users/hfawaz/followers",
"following_url": "https://api.github.com/users/hfawaz/following{/other_user}",
"gists_url": "https://api.github.com/users/hfawaz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hfawaz",
"id": 29229602,
"login": "hfawaz",
"node_id": "MDQ6VXNlcjI5MjI5NjAy",
"organizations_url": "https://api.github.com/users/hfawaz/orgs",
"received_events_url": "https://api.github.com/users/hfawaz/received_events",
"repos_url": "https://api.github.com/users/hfawaz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hfawaz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hfawaz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hfawaz"
} | [] | closed | false | null | [] | null | [
"Moved to https://github.com/huggingface/datasets/pull/1315"
] | "2020-12-07T09:35:36Z" | "2020-12-08T15:43:24Z" | "2020-12-08T15:00:50Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1239.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1239",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1239.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1239"
} | This corresponds to the Yelp-5 requested in https://github.com/huggingface/datasets/issues/353 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1239/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1239/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1470 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1470/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1470/comments | https://api.github.com/repos/huggingface/datasets/issues/1470/events | https://github.com/huggingface/datasets/pull/1470 | 761,791,065 | MDExOlB1bGxSZXF1ZXN0NTM2NDA2MjQx | 1,470 | Add wiki lingua dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/7674948?v=4",
"events_url": "https://api.github.com/users/katnoria/events{/privacy}",
"followers_url": "https://api.github.com/users/katnoria/followers",
"following_url": "https://api.github.com/users/katnoria/following{/other_user}",
"gists_url": "https://api.github.com/users/katnoria/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/katnoria",
"id": 7674948,
"login": "katnoria",
"node_id": "MDQ6VXNlcjc2NzQ5NDg=",
"organizations_url": "https://api.github.com/users/katnoria/orgs",
"received_events_url": "https://api.github.com/users/katnoria/received_events",
"repos_url": "https://api.github.com/users/katnoria/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/katnoria/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/katnoria/subscriptions",
"type": "User",
"url": "https://api.github.com/users/katnoria"
} | [] | closed | false | null | [] | null | [
"it’s failing because of `RemoteDatasetTest.test_load_dataset_orange_sum`\r\nwhich i think is not the dataset you are doing a PR for. Try rebasing with:\r\n```\r\ngit fetch upstream\r\ngit rebase upstream/master\r\ngit push -u -f origin your_branch\r\n```",
"> it’s failing because of `RemoteDatasetTest.test_load_dataset_orange_sum`\r\n> which i think is not the dataset you are doing a PR for. Try rebasing with:\r\n> \r\n> ```\r\n> git fetch upstream\r\n> git rebase upstream/master\r\n> git push -u -f origin your_branch\r\n> ```\r\n\r\nThanks, my branch seems to be up to date. \r\n```Current branch add-wiki-lingua-dataset is up to date.```",
"Also where do the google drive urls come from ?",
"looks like this PR includes changes about many other files than the ones for wiki_lingua.\r\n\r\nCan you create another branch and another PR ?\r\n(or you can try to fix this branch with rebase and push force if you're familiar with it)",
"Thanks for fixing the dummy data and removing the glob call :) ",
"> looks like this PR includes changes about many other files than the ones for wiki_lingua.\r\n> \r\n> Can you create another branch and another PR ?\r\n> (or you can try to fix this branch with rebase and push force if you're familiar with it)\r\n\r\nEasier to create a new branch and submit, I have submitted a new PR #1582 ",
"Closing this one in favor of #1582 "
] | "2020-12-11T02:04:18Z" | "2020-12-16T15:27:13Z" | "2020-12-16T15:27:13Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1470.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1470",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1470.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1470"
} | Hello @lhoestq ,
I am opening a fresh pull request as advised in my original PR https://github.com/huggingface/datasets/pull/1308
Thanks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1470/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1470/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1955 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1955/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1955/comments | https://api.github.com/repos/huggingface/datasets/issues/1955/events | https://github.com/huggingface/datasets/pull/1955 | 818,010,664 | MDExOlB1bGxSZXF1ZXN0NTgxMzk2OTA5 | 1,955 | typos + grammar | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [] | closed | false | null | [] | null | [] | "2021-02-27T20:21:43Z" | "2021-03-01T17:20:38Z" | "2021-03-01T14:43:19Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1955.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1955",
"merged_at": "2021-03-01T14:43:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1955.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1955"
} | This PR proposes a few typo + grammar fixes, and rewrites some sentences in an attempt to improve readability.
N.B. When referring to the library `datasets` in the docs it is typically used as a singular, and it definitely is a singular when written as "`datasets` library", that is "`datasets` library is ..." and not "are ...". | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1955/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1955/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/375 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/375/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/375/comments | https://api.github.com/repos/huggingface/datasets/issues/375/events | https://github.com/huggingface/datasets/issues/375 | 655,023,307 | MDU6SXNzdWU2NTUwMjMzMDc= | 375 | TypeError when computing bertscore | {
"avatar_url": "https://avatars.githubusercontent.com/u/13269577?v=4",
"events_url": "https://api.github.com/users/willywsm1013/events{/privacy}",
"followers_url": "https://api.github.com/users/willywsm1013/followers",
"following_url": "https://api.github.com/users/willywsm1013/following{/other_user}",
"gists_url": "https://api.github.com/users/willywsm1013/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/willywsm1013",
"id": 13269577,
"login": "willywsm1013",
"node_id": "MDQ6VXNlcjEzMjY5NTc3",
"organizations_url": "https://api.github.com/users/willywsm1013/orgs",
"received_events_url": "https://api.github.com/users/willywsm1013/received_events",
"repos_url": "https://api.github.com/users/willywsm1013/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/willywsm1013/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/willywsm1013/subscriptions",
"type": "User",
"url": "https://api.github.com/users/willywsm1013"
} | [] | closed | false | null | [] | null | [
"I am not able to reproduce this issue on my side.\r\nCould you give us more details about the inputs you used ?\r\n\r\nI do get another error though:\r\n```\r\n~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/bert_score/utils.py in bert_cos_score_idf(model, refs, hyps, tokenizer, idf_dict, verbose, batch_size, device, all_layers)\r\n 371 return sorted(list(set(l)), key=lambda x: len(x.split(\" \")))\r\n 372 \r\n--> 373 sentences = dedup_and_sort(refs + hyps)\r\n 374 embs = []\r\n 375 iter_range = range(0, len(sentences), batch_size)\r\n\r\nValueError: operands could not be broadcast together with shapes (0,) (2,)\r\n```\r\nThat's because it gets numpy arrays as input and not lists. See #387 ",
"The other issue was fixed by #403 \r\n\r\nDo you still get this issue @willywsm1013 ?\r\n"
] | "2020-07-10T20:37:44Z" | "2022-06-01T15:15:59Z" | "2022-06-01T15:15:59Z" | NONE | null | null | null | Hi,
I installed nlp 0.3.0 via pip, and my python version is 3.7.
When I tried to compute bertscore with the code:
```
import nlp
bertscore = nlp.load_metric('bertscore')
# load hyps and refs
...
print (bertscore.compute(hyps, refs, lang='en'))
```
I got the following error.
```
Traceback (most recent call last):
File "bert_score_evaluate.py", line 16, in <module>
print (bertscore.compute(hyps, refs, lang='en'))
File "/home/willywsm/anaconda3/envs/torcher/lib/python3.7/site-packages/nlp/metric.py", line 200, in compute
output = self._compute(predictions=predictions, references=references, **metrics_kwargs)
File "/home/willywsm/anaconda3/envs/torcher/lib/python3.7/site-packages/nlp/metrics/bertscore/fb176889831bf0ce995ed197edc94b2e9a83f647a869bb8c9477dbb2d04d0f08/bertscore.py", line 105, in _compute
hashcode = bert_score.utils.get_hash(model_type, num_layers, idf, rescale_with_baseline)
TypeError: get_hash() takes 3 positional arguments but 4 were given
```
It seems like there is something wrong with the `get_hash()` function. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/375/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/375/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/71 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/71/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/71/comments | https://api.github.com/repos/huggingface/datasets/issues/71/events | https://github.com/huggingface/datasets/pull/71 | 615,942,180 | MDExOlB1bGxSZXF1ZXN0NDE2MTUxODM4 | 71 | Fix arrow writer for big datasets using writer_batch_size | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"After a quick chat with Yacine : the 2Go test may not be sufficient actually, as I'm looking at the size of the array and not the size of the current_rows. If the test doesn't do the job I think I'll remove it and lower the batch size a bit to be sure that it never exceeds 2Go. I'll do more tests later"
] | "2020-05-11T14:45:36Z" | "2020-05-11T20:09:47Z" | "2020-05-11T20:00:38Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/71.diff",
"html_url": "https://github.com/huggingface/datasets/pull/71",
"merged_at": "2020-05-11T20:00:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/71.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/71"
} | This PR fixes Yacine's bug.
According to the [Arrow docs on size limitations](https://github.com/apache/arrow/blob/master/docs/source/cpp/arrays.rst#size-limitations-and-recommendations), it is not recommended to have pyarrow arrays bigger than 2GB.
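For context, a rough sketch of how a user could lower the per-batch size themselves if they ever hit this limit (this assumes the `writer_batch_size` argument is also exposed on `Dataset.map`; the dataset and values are illustrative):
```python
from datasets import load_dataset

dset = load_dataset("squad", split="train")  # any large dataset

# Writing with a smaller writer_batch_size keeps each Arrow batch
# comfortably under the ~2GB array recommendation.
dset = dset.map(lambda example: example, writer_batch_size=10000)
```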
Therefore I set a default batch size of 100 000 examples per batch, which in general shouldn't exceed 2GB. If it does, I reduce the batch_size on the fly and notify the user with a warning. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/71/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/71/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4227 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4227/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4227/comments | https://api.github.com/repos/huggingface/datasets/issues/4227/events | https://github.com/huggingface/datasets/pull/4227 | 1,216,455,316 | PR_kwDODunzps420-mc | 4,227 | Add f1 metric card, update docstring in py file | {
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/emibaylor",
"id": 27527747,
"login": "emibaylor",
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/emibaylor"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | "2022-04-26T20:41:03Z" | "2022-05-03T12:50:23Z" | "2022-05-03T12:43:33Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4227.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4227",
"merged_at": "2022-05-03T12:43:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4227.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4227"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4227/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4227/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1543 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1543/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1543/comments | https://api.github.com/repos/huggingface/datasets/issues/1543/events | https://github.com/huggingface/datasets/pull/1543 | 765,476,196 | MDExOlB1bGxSZXF1ZXN0NTM4OTcwOTU5 | 1,543 | adding HindEncorp | {
"avatar_url": "https://avatars.githubusercontent.com/u/56379013?v=4",
"events_url": "https://api.github.com/users/rahul-art/events{/privacy}",
"followers_url": "https://api.github.com/users/rahul-art/followers",
"following_url": "https://api.github.com/users/rahul-art/following{/other_user}",
"gists_url": "https://api.github.com/users/rahul-art/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rahul-art",
"id": 56379013,
"login": "rahul-art",
"node_id": "MDQ6VXNlcjU2Mzc5MDEz",
"organizations_url": "https://api.github.com/users/rahul-art/orgs",
"received_events_url": "https://api.github.com/users/rahul-art/received_events",
"repos_url": "https://api.github.com/users/rahul-art/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rahul-art/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rahul-art/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rahul-art"
} | [] | closed | false | null | [] | null | [
"@lhoestq I have created a new PR by reforking and creating a new branch ",
"@rahul-art unfortunately this didn't quite work, here's how you can try again:\r\n- `git checkout master` to go back to the main branch\r\n- `git pull upstream master` to make it up to date\r\n- `git checkout -b add_hind_encorp` to create a new branch\r\n\r\nThen add the dataset script, `README.md`, `dummy_data.zip`, and `dataset_infos.json` to the tracked files for the branch with `git add` (please add all of these files individually, NOT the whole directory as we don't want the other data files)\r\nThen after you have passed the style checks and the local tests, do:\r\n- `git commit . -m initial_commit`\r\n- `git push --set-upstream origin add_hind_encorp`\r\n\r\nThen you can go to this branch on the WebApp and open a new PR",
"@yjernite #1557 created new PR"
] | "2020-12-13T15:39:07Z" | "2020-12-13T23:35:53Z" | "2020-12-13T23:35:53Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1543.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1543",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1543.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1543"
} | adding Hindi Wikipedia corpus | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1543/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1543/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6451 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6451/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6451/comments | https://api.github.com/repos/huggingface/datasets/issues/6451/events | https://github.com/huggingface/datasets/issues/6451 | 2,010,693,912 | I_kwDODunzps532MEY | 6,451 | Unable to read "marsyas/gtzan" data | {
"avatar_url": "https://avatars.githubusercontent.com/u/32300890?v=4",
"events_url": "https://api.github.com/users/gerald-wrona/events{/privacy}",
"followers_url": "https://api.github.com/users/gerald-wrona/followers",
"following_url": "https://api.github.com/users/gerald-wrona/following{/other_user}",
"gists_url": "https://api.github.com/users/gerald-wrona/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gerald-wrona",
"id": 32300890,
"login": "gerald-wrona",
"node_id": "MDQ6VXNlcjMyMzAwODkw",
"organizations_url": "https://api.github.com/users/gerald-wrona/orgs",
"received_events_url": "https://api.github.com/users/gerald-wrona/received_events",
"repos_url": "https://api.github.com/users/gerald-wrona/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gerald-wrona/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gerald-wrona/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gerald-wrona"
} | [] | closed | false | null | [] | null | [
"Hi! We've merged a [PR](https://huggingface.co/datasets/marsyas/gtzan/discussions/1) that fixes the script's path logic on Windows.",
"I have transferred the discussion to the corresponding dataset: https://huggingface.co/datasets/marsyas/gtzan/discussions/2\r\n\r\nLet's continue there.",
"@mariosasko @albertvillanova \r\n\r\nThank you both very much for the speedy resolution :)"
] | "2023-11-25T15:13:17Z" | "2023-12-01T12:53:46Z" | "2023-11-27T09:36:25Z" | NONE | null | null | null | Hi, this is my code and the error:
```
from datasets import load_dataset
gtzan = load_dataset("marsyas/gtzan", "all")
```
[error_trace.txt](https://github.com/huggingface/datasets/files/13464397/error_trace.txt)
[audio_yml.txt](https://github.com/huggingface/datasets/files/13464410/audio_yml.txt)
Python 3.11.5
Jupyter Notebook 6.5.4
Windows 10
I'm able to download and work with other datasets, but not this one. For example, both of the snippets below work fine:
```
from datasets import load_dataset
dataset = load_dataset("facebook/voxpopuli", "pl", split="train", streaming=True)
minds = load_dataset("PolyAI/minds14", name="en-US", split="train")
```
Thanks for your help
https://huggingface.co/datasets/marsyas/gtzan/tree/main | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6451/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6451/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/883 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/883/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/883/comments | https://api.github.com/repos/huggingface/datasets/issues/883/events | https://github.com/huggingface/datasets/issues/883 | 749,750,801 | MDU6SXNzdWU3NDk3NTA4MDE= | 883 | Downloading/caching only a part of a datasets' dataset. | {
"avatar_url": "https://avatars.githubusercontent.com/u/44585792?v=4",
"events_url": "https://api.github.com/users/SapirWeissbuch/events{/privacy}",
"followers_url": "https://api.github.com/users/SapirWeissbuch/followers",
"following_url": "https://api.github.com/users/SapirWeissbuch/following{/other_user}",
"gists_url": "https://api.github.com/users/SapirWeissbuch/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SapirWeissbuch",
"id": 44585792,
"login": "SapirWeissbuch",
"node_id": "MDQ6VXNlcjQ0NTg1Nzky",
"organizations_url": "https://api.github.com/users/SapirWeissbuch/orgs",
"received_events_url": "https://api.github.com/users/SapirWeissbuch/received_events",
"repos_url": "https://api.github.com/users/SapirWeissbuch/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SapirWeissbuch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SapirWeissbuch/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SapirWeissbuch"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | open | false | null | [] | null | [
"Not at the moment but we could likely support this feature.",
"?",
"I think it would be a very helpful feature, because sometimes one only wants to evaluate models on the dev set, and the whole training data may be many times bigger.\r\nThis makes the task impossible with limited memory resources."
] | "2020-11-24T14:25:18Z" | "2020-11-27T13:51:55Z" | null | NONE | null | null | null | Hi,
I want to use the validation data *only* (of Natural Questions).
I don't want to have the whole dataset cached on my machine, just the dev set.
Is this possible? I can't find a way to do it in the docs.
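For reference, a hedged sketch of what this could look like. Note the maintainer reply above says partial caching was not supported at the time; the `streaming=True` flag (added in later `datasets` versions) is assumed here as the way to avoid materializing everything locally:
```python
from datasets import load_dataset

# assumption: streaming the validation split avoids caching the full
# Natural Questions dataset on disk
nq_dev = load_dataset("natural_questions", split="validation", streaming=True)
```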
Thank you,
Sapir | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/883/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/883/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3163 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3163/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3163/comments | https://api.github.com/repos/huggingface/datasets/issues/3163/events | https://github.com/huggingface/datasets/pull/3163 | 1,035,475,061 | PR_kwDODunzps4tpI44 | 3,163 | Add Image feature | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"Awesome, looking forward to using it :)",
"Few additional comments:\r\n* the current API doesn't meet the requirements mentioned in #3145 (e.g. image mime-type). However, this will be doable soon as we also plan to store image bytes alongside paths in arrow files (see https://github.com/huggingface/datasets/pull/3129#discussion_r738426187). Then, PIL can return the correct mime-type: \r\n ```python\r\n from PIL import Image\r\n import io\r\n\r\n mimetype = Image.open(io.BytesIO(image_bytes)).get_format_mimetype()\r\n ``` \r\n I plan to add this change in a separate PR.\r\n* currently, I'm returning an `np.ndarray` object after decoding for consistency with the Audio feature. However, the vision models from Transformers prefer an `Image` object to avoid the `Image.fromarray` call in the corresponding feature extractors (see [this warning](https://huggingface.co/transformers/master/model_doc/vit.html#transformers.ViTFeatureExtractor.__call__) in the Transformers docs) cc @NielsRogge \r\n\r\nSo I'm not entirely sure whether to return only a NumPy array, only a PIL Image, or both when decoding. The last point worries me because we shouldn't provide an API that leads to a warning in Transformers (in the docs, not in code :)). At the same time, it makes sense to preserve consistency with the Audio feature and return a NumPy array. \r\n\r\nThat's why I would appreciate your opinions on this.",
"That is a good question. Also pinging @nateraw .\r\n\r\nCurrently we only support returning numpy arrays because of numpy/tf/torch/jax formatting features that we have, and to keep things simple. See the [set_format docs](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.set_format) for more info",
"I don't think centering the discussion on what ViT expects is good, as the vision Transformers model are still in an experimental stage and we can adapt those depending on what you do here :-).\r\n\r\nIMO, the discussion should revolve on what a user will want to do with a vision dataset, and they will want to:\r\n- lazily decode their images\r\n- maybe apply data augmentation (for the training set)\r\n- resize to a fixed shape for batching\r\n\r\nThe libraries that provide step 2 and 3 either use PIL (thinking torchvision) or cv2 (thinking albumentations). NumPy does not have any function to resize an image or do basic data augmentation (like a rotate) so I think it shouldn't be the default format for an image dataset, PIL or cv2 (in an ideal world with the ability to switch between the two depending on what the users prefer) would be better.\r\n\r\nSide note: I will work on the vision integration in Transformers with Niels next month so please keep me in the loop for those awesome new vision features!",
"@sgugger I completely agree with you, especially after trying to convert the `run_image_classification` script from Transformers to use this feature. The current API doesn't seem intuitive there due to the torchvision transforms, which, as you say, prefer PIL over NumPy arrays. \r\n\r\nSo the default format would return `Image` (PIL) / `np.ndarray` (cv2) and `set_format(numpy/tf/pt)` would return image tensors if I understand you correctly. IMO this makes a lot more sense (and flexibility) than the current API.",
"Also, one additional library worth mentioning here is AugLy which supports image file paths and `PIL.Image.Image` objects.",
"That's so nice !\r\n\r\nAlso I couldn't help myself so I've played with it already ^^\r\nI was agreeably surprised that with minor additions I managed to even allow this, which I find very satisfactory:\r\n```python\r\nimport PIL.Image\r\nfrom datasets import Dataset\r\n\r\npath = \"docs/source/imgs/datasets_logo_name.jpg\"\r\n\r\ndataset = Dataset.from_dict({\"img\": [PIL.Image.open(path)]})\r\nprint(dataset.features)\r\n# {'img': Image(id=None)}\r\nprint(dataset[0][\"img\"])\r\n# <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1200x300 at 0x129DE4AC8>\r\n```\r\n\r\nLet me know if that's a behavior you'd also like to see \r\n\r\nEDIT: just pushed my changes on a branch, you can see the diff [here](https://github.com/mariosasko/datasets-1/compare/add-image-feature...huggingface:image-type-inference) if you want",
"Thanks, @lhoestq! I like your change. Very elegant indeed.\r\n\r\nP.S. I have to write a big comment that explains all the changes/things left to consider. Will do that in the next few days!",
"I'm marking this PR as ready for review.\r\n\r\nThanks to @sgugger's comment, the API is much more flexible now as it decodes images (lazily) as `PIL.Image.Image` objects and supports transforms directly on them.\r\n\r\nAlso, we no longer return paths explicitly (previously, we would return `{\"path\": image_path, \"image\": pil_image}`) for the following reasons:\r\n* what to return when reading an image from an URL or a NumPy array. We could set `path` to `None` in these situations, but IMO we should avoid redundant information.\r\n* returning a dict doesn't match nicely with the requirement of supporting image modifications - what to do if the user modifies both the image path and the image\r\n\r\n(Btw, for the images stored locally, you can access their paths with `dset[idx][\"image\"].filename`, or by avoiding decoding with `paths = [ex[\"path\"] for ex in dset]`. @lhoestq @albertvillanova WDYT about having an option to skip decoding for complex features, e. g. `Audio(decode=False)`? This way, the user can easily access the underlying data.)\r\n\r\nExamples of what you can do:\r\n```python\r\n# load local images\r\ndset = Dataset.from_dict(\"image\": [local_image_path], features=Features({\"images\": Image()}))\r\n# load remote images (we got this for free by adding support for streaming)\r\ndset = Dataset.from_dict(\"image\": [image_url], features=Features({\"images\": Image()}))\r\n# from np.ndarray\r\ndset = Dataset.from_dict({\"image\": [np.array(...)]}, features=Features({\"images\": Image()}))\r\n# cast column\r\ndset = Dataset.from_dict({\"image\": [local_image_path]})\r\ndset.cast_column(\"image\", Image())\r\n\r\n# automatic type inference\r\ndset = Dataset.from_dict({\"image\": [PIL.Image.open(local_image_path)]})\r\n\r\n# transforms\r\ndef img_transform(example):\r\n ...\r\n example[\"image\"] = transformed_pil_image_or_np_ndarray\r\n return example\r\ndset.map(img_trnasform)\r\n\r\n# transform that adds a new column with images (automatic inference of the feature type)\r\ndset.map(lambda ex: {\"image_resized\": ex[\"image\"].resize((100, 100))})\r\nprint(dset.features[\"image_resized\"]) # will print Image()\r\n```\r\n\r\nSome more cool features:\r\n* We store the image filename (`pil_image.filename`) whenever possible to avoid costly conversion to bytes\r\n* if possible, we use native compression when encoding images. Otherwise, we fall back to the lossless PNG format (e.g. after image ops or when storing NumPy arrays)\r\n\r\nHints to make reviewing easier:\r\n* feel free to ignore the extension type part because it's related to PyArrow internals.\r\n* also, let me know if we are too strict/ too flexible in terms of types the Image feature can encode/decode. Hints:\r\n * `encode_example` handles encoding during dataset generation (you can think of it as `yield key, features.encode_example(example)`)\r\n * `objects_to_list_of_image_dicts` handles encoding of returned examples in `map`\r\n\r\nP.S. I'll fork the PR branch and start adding the Image feature to the existing image datasets (will also update the `ImageClassification` template while doing that).",
"> WDYT about having an option to skip decoding for complex features, e. g. Audio(decode=False)?\r\n\r\nYes definitely, also I think it could be useful for the dataset viewer to not decode the data but instead return either the bytes or the (possibly chained) URL. cc @severo ",
"We want to merge this today/tomorrow, so I'd really appreciate your reviews @sgugger @nateraw.\r\n\r\nAlso, you can test this feature on the existing image datasets (MNIST, beans, food101, ...) by installing `datasets` from the PR branch:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git@adapt-image-datasets\r\n```\r\n",
"Thanks for the review @nateraw!\r\n\r\n1. This is a copy of your notebook with the fixed map call: https://colab.research.google.com/gist/mariosasko/e351a717682a9392ca03908e65a2600e/image-feature-demo.ipynb\r\n (Sorry for misleading you with the map call in my un-updated notebook)\r\n Also, we can avoid this cast by trying to infer the type of the column (`\"pixel_values\"`) returned by the image feature extractor (we are already doing something similar for the columns with names: `\"attention_mask\"`, `\"input_ids\"`, ...). I plan to add this QOL improvement soon. \r\n2. It should work OK even without updating Pillow and PyArrow (these two libraries are pre-installed in Colab, so updating them requires a restart of the runtime). \r\n > I noticed an error that I'm guessing you ran into when I tried using the older version\r\n\r\n Do you recall which type of error it was because everything works fine on my side if I run the notebooks with the lowest supported version of Pillow (`6.2.1`)?",
"Thanks for playing with it @nateraw and for sharing your notebook, this is useful :)\r\n\r\nI think this is ready now, congrats @mariosasko !",
"Love this feature and hope to release soon!"
] | "2021-10-25T19:07:48Z" | "2021-12-30T06:37:21Z" | "2021-12-06T17:49:02Z" | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3163.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3163",
"merged_at": "2021-12-06T17:49:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3163.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3163"
} | Adds the Image feature. This feature is heavily inspired by the recently added Audio feature (#2324). Currently, this PR is pretty simple.
Some considerations that need further discussion:
* I've decided to use `Pillow`/`PIL` as the image decoding library. Another candidate I considered is `torchvision`, mostly because of its `accimage` backend, which should be faster for loading `jpeg` images than `Pillow`. However, `torchvision`'s io module only supports png and jpeg images, has `torch` as a hard dependency, and requires magic to work with image bytes ( `torch.ByteTensor(torch.ByteStorage.from_buffer(image_bytes)))`).
* Currently, I'm converting `PIL`'s `Image` type to `np.ndarray`. The vision models in Transformers such as ViT prefer the raw `Image` type and not the decoded tensors, so there is a small overhead due to [this conversion](https://github.com/huggingface/transformers/blob/3e8761ab8077e3bb243fe2f78b2a682bd2257cf1/src/transformers/image_utils.py#L62-L73). IMO this is justified to keep this part aligned with the Audio feature, which also returns `np.ndarray`. What do you think?
* Still have to work on the channel decoding logic:
* PyTorch prefers the channel-first ordering (C, H, W); TF and Flax the channel-last ordering (H, W, C). One cool feature would be adjusting the channel order based on the selected formatter (`torch`, `tf`, `jax`).
 * By default, `Image.open` returns images of shape (H, W, C). However, ViT's feature extractor expects the format (C, H, W) if the image is passed as an array (explained [here](https://huggingface.co/transformers/model_doc/vit.html#transformers.ViTFeatureExtractor.__call__)), so I'm more inclined toward the format (C, H, W). Which one do you prefer, (C, H, W) or (H, W, C)? (See the sketch after this list.)
* Are there any options you'd like to see? (the user could change those via `cast_column`, such as `sampling_rate` in the Audio feature)
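A minimal sketch of the two channel orderings discussed in the list above (the image path is just a placeholder):
```python
import numpy as np
import PIL.Image

img = PIL.Image.open("path/to/image.jpg")  # placeholder path

arr_hwc = np.asarray(img)             # (H, W, C): what PIL / TF / Flax use
arr_chw = arr_hwc.transpose(2, 0, 1)  # (C, H, W): what PyTorch / ViT expect
```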
TODOs:
* [x] tests
* in subsequent PRs:
* docs - a section in the docs, which gives some additional info on the Image and Audio feature and compares them to
`ArrayND`
* streaming (waiting for #3129 and #3133 to get merged first)
* update the image tasks and the datasets to use the new feature
* Image/Audio formatting
[Colab Notebook](https://colab.research.google.com/drive/1mIrTnqTVkWLJWoBzT1ABSe-LFelIep1c?usp=sharing) where you can play with this feature.
I'm also adding a link to the [Image](https://github.com/tensorflow/datasets/blob/7ac7d506488d46038a5854961d068926b3f93c7f/tensorflow_datasets/core/features/image_feature.py#L155) feature in TFDS because one of our goals is to parse TFDS scripts eventually, so our Image feature has to (at least) support all the formats theirs does.
Feel free to cc anyone who might be interested.
P.S. Please ignore the changes in the `datasets/**/*.py` files 😄. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 7,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 8,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3163/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3163/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4473 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4473/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4473/comments | https://api.github.com/repos/huggingface/datasets/issues/4473/events | https://github.com/huggingface/datasets/pull/4473 | 1,267,555,994 | PR_kwDODunzps45d5-R | 4,473 | Add SST-2 dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"on the hub this dataset is referenced as `sst-2` not `sst2` – is there a canonical orthography? If not, could we name it `sst-2`?",
"@julien-c, we normally do not use hyphens for dataset names: whenever the original dataset name contains a hyphen, we usually:\r\n- either suppress it: CoNLL-2000 (`conll2000`), CORD-19 (`cord19`)\r\n- or replace it with underscore: CC-News (`cc_news`), SQuAD-es (`squad_es`)\r\n\r\nThere are some exceptions though... (I wonder why)\r\n\r\nI think, the reason is there was a 1-to-1 relation with the corresponding Python module name.\r\n\r\nI personally find confusing not having a rule and using both hyphens and underscores indistinctly: you never know which is the right orthography.\r\n\r\nWhichever the decision we make, I would prefer to be applied consistently.\r\n\r\nAlso note that we already implemented this dataset as part of GLUE: https://github.com/huggingface/datasets/blob/master/datasets/glue/glue.py#L163\r\n- dataset name: `glue`\r\n- config name: `sst2`\r\n\r\nOn the other hand, let's see how other libraries name it:\r\n- torchtext: `SST2` https://pytorch.org/text/stable/datasets.html#sst2\r\n- OpenAI CLIP: `rendered-sst2` https://github.com/openai/CLIP/blob/main/data/rendered-sst2.md\r\n- Kaggle: `SST2` https://www.kaggle.com/datasets/atulanandjha/stanford-sentiment-treebank-v2-sst2/version/22\r\n- TensorFlow Datasets: `glue/sst2` https://www.tensorflow.org/datasets/catalog/glue#gluesst2",
"Ok, another option is to open PRs against the models in https://huggingface.co/models?datasets=sst-2 to change their dataset reference to `sst2`\r\n\r\n(BTW some models refer to `sst2` already – but they're less popular: https://huggingface.co/models?datasets=sst2)",
"OK, I'm taking care of the subsequent PRs on models to align with this dataset name."
] | "2022-06-10T13:37:26Z" | "2022-06-13T14:11:34Z" | "2022-06-13T14:01:09Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4473.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4473",
"merged_at": "2022-06-13T14:01:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4473.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4473"
} | Add SST-2 dataset.
Currently, it is only available as part of the GLUE benchmark.
This PR adds it as a standalone dataset.
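A short sketch of what this enables (the standalone name `sst2` is assumed here, following the naming discussion in the comments above):
```python
from datasets import load_dataset

# already available as a GLUE config
sst2_glue = load_dataset("glue", "sst2")

# with this PR, as a standalone dataset (name assumed to be "sst2")
sst2 = load_dataset("sst2")
```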
CC: @julien-c | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4473/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4473/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5898 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5898/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5898/comments | https://api.github.com/repos/huggingface/datasets/issues/5898/events | https://github.com/huggingface/datasets/issues/5898 | 1,726,190,481 | I_kwDODunzps5m45OR | 5,898 | Loading The flores data set for specific language | {
"avatar_url": "https://avatars.githubusercontent.com/u/36159918?v=4",
"events_url": "https://api.github.com/users/106AbdulBasit/events{/privacy}",
"followers_url": "https://api.github.com/users/106AbdulBasit/followers",
"following_url": "https://api.github.com/users/106AbdulBasit/following{/other_user}",
"gists_url": "https://api.github.com/users/106AbdulBasit/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/106AbdulBasit",
"id": 36159918,
"login": "106AbdulBasit",
"node_id": "MDQ6VXNlcjM2MTU5OTE4",
"organizations_url": "https://api.github.com/users/106AbdulBasit/orgs",
"received_events_url": "https://api.github.com/users/106AbdulBasit/received_events",
"repos_url": "https://api.github.com/users/106AbdulBasit/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/106AbdulBasit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/106AbdulBasit/subscriptions",
"type": "User",
"url": "https://api.github.com/users/106AbdulBasit"
} | [] | closed | false | null | [] | null | [
"got that the syntax is like this\r\n\r\ndataset = load_dataset(\"facebook/flores\", \"ace_Arab\")"
] | "2023-05-25T17:08:55Z" | "2023-05-25T17:21:38Z" | "2023-05-25T17:21:37Z" | NONE | null | null | null | ### Describe the bug
I am trying to load the Flores dataset.
The code that is given is:
```
from datasets import load_dataset
dataset = load_dataset("facebook/flores")
```
This gives an error about the config name:
"ValueError: Config name is missing"
Now if I add a config, it gives me another error:
"HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'facebook/flores, 'ace_Arab''."
How can I load the data for a specific language?
I couldn't find any tutorial on this; can anyone help me out?
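(For reference, as the comment above notes, the config name is passed as the second argument; a minimal sketch, with `ace_Arab` as an example language/script code:)
```python
from datasets import load_dataset

# the language/script code is the dataset config name, passed as the second argument
dataset = load_dataset("facebook/flores", "ace_Arab")
```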
### Steps to reproduce the bug
Step one: load the dataset.
`from datasets import load_dataset
dataset = load_dataset("facebook/flores")`
This gives an error about the missing config.
Once a config is given,
it gives the following error:
"HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'facebook/flores, 'ace_Arab''.
"
### Expected behavior
The dataset should load, but I am receiving an error instead.
### Environment info
Datasets, Python | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5898/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5898/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2434 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2434/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2434/comments | https://api.github.com/repos/huggingface/datasets/issues/2434/events | https://github.com/huggingface/datasets/issues/2434 | 907,503,557 | MDU6SXNzdWU5MDc1MDM1NTc= | 2,434 | Extend QuestionAnsweringExtractive template to handle nested columns | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"this is also the case for the following datasets and configurations:\r\n\r\n* `mlqa` with config `mlqa-translate-train.ar`\r\n\r\n",
"The current task API is somewhat deprecated (we plan to align it with `train eval index` at some point), so I think we can close this issue."
] | "2021-05-31T14:06:51Z" | "2022-10-05T17:06:28Z" | "2022-10-05T17:06:28Z" | MEMBER | null | null | null | Currently the `QuestionAnsweringExtractive` task template and `preprare_for_task` only support "flat" features. We should extend the functionality to cover QA datasets like:
* `iapp_wiki_qa_squad`
* `parsinlu_reading_comprehension`
where the nested features differ from those of `squad` and trigger an `ArrowNotImplementedError`:
```
---------------------------------------------------------------------------
ArrowNotImplementedError Traceback (most recent call last)
<ipython-input-12-50e5b8f69c20> in <module>
----> 1 ds.prepare_for_task("question-answering-extractive")[0]
~/git/datasets/src/datasets/arrow_dataset.py in prepare_for_task(self, task)
1436 # We found a template so now flush `DatasetInfo` to skip the template update in `DatasetInfo.__post_init__`
1437 dataset.info.task_templates = None
-> 1438 dataset = dataset.cast(features=template.features)
1439 return dataset
1440
~/git/datasets/src/datasets/arrow_dataset.py in cast(self, features, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, num_proc)
977 format = self.format
978 dataset = self.with_format("arrow")
--> 979 dataset = dataset.map(
980 lambda t: t.cast(schema),
981 batched=True,
~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
1600
1601 if num_proc is None or num_proc == 1:
-> 1602 return self._map_single(
1603 function=function,
1604 with_indices=with_indices,
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
176 }
177 # apply actual function
--> 178 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
179 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
180 # re-apply format to the output
~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs)
395 # Call actual function
396
--> 397 out = func(self, *args, **kwargs)
398
399 # Update fingerprint of in-place transforms + update in-place history of transforms
~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, desc)
1940 ) # Something simpler?
1941 try:
-> 1942 batch = apply_function_on_filtered_inputs(
1943 batch,
1944 indices,
~/git/datasets/src/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
1836 effective_indices = [i + offset for i in indices] if isinstance(indices, list) else indices + offset
1837 processed_inputs = (
-> 1838 function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
1839 )
1840 if update_data is None:
~/git/datasets/src/datasets/arrow_dataset.py in <lambda>(t)
978 dataset = self.with_format("arrow")
979 dataset = dataset.map(
--> 980 lambda t: t.cast(schema),
981 batched=True,
982 batch_size=batch_size,
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.cast()
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.ChunkedArray.cast()
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/compute.py in cast(arr, target_type, safe)
241 else:
242 options = CastOptions.unsafe(target_type)
--> 243 return call_function("cast", [arr], options)
244
245
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/_compute.pyx in pyarrow._compute.call_function()
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/_compute.pyx in pyarrow._compute.Function.call()
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/miniconda3/envs/datasets/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowNotImplementedError: Unsupported cast from struct<answer_end: list<item: int32>, answer_start: list<item: int32>, text: list<item: string>> to struct using function cast_struct
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2434/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2434/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1919 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1919/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1919/comments | https://api.github.com/repos/huggingface/datasets/issues/1919/events | https://github.com/huggingface/datasets/issues/1919 | 812,626,872 | MDU6SXNzdWU4MTI2MjY4NzI= | 1,919 | Failure to save with save_to_disk | {
"avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4",
"events_url": "https://api.github.com/users/M-Salti/events{/privacy}",
"followers_url": "https://api.github.com/users/M-Salti/followers",
"following_url": "https://api.github.com/users/M-Salti/following{/other_user}",
"gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/M-Salti",
"id": 9285264,
"login": "M-Salti",
"node_id": "MDQ6VXNlcjkyODUyNjQ=",
"organizations_url": "https://api.github.com/users/M-Salti/orgs",
"received_events_url": "https://api.github.com/users/M-Salti/received_events",
"repos_url": "https://api.github.com/users/M-Salti/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions",
"type": "User",
"url": "https://api.github.com/users/M-Salti"
} | [] | closed | false | null | [] | null | [
"Hi thanks for reporting and for proposing a fix :)\r\n\r\nI just merged a fix, feel free to try it from the master branch !",
"Closing since this has been fixed by #1923"
] | "2021-02-20T14:18:10Z" | "2021-03-03T17:40:27Z" | "2021-03-03T17:40:27Z" | CONTRIBUTOR | null | null | null | When I try to save a dataset locally using the `save_to_disk` method I get the error:
```bash
FileNotFoundError: [Errno 2] No such file or directory: '/content/squad/train/squad-train.arrow'
```
To replicate:
1. Install `datasets` from master
2. Run this code:
```python
from datasets import load_dataset
squad = load_dataset("squad") # or any other dataset
squad.save_to_disk("squad") # error here
```
The problem is that the method does not create a directory with the name `dataset_path` to save the dataset in (i.e. it's not creating the *train* and *validation* directories in this case). After creating the directories manually, the problem resolves (see the sketch below).
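A rough sketch of that manual workaround (the directory names are assumed to mirror the split names):
```python
import os
from datasets import load_dataset

squad = load_dataset("squad")

# assumed workaround: create one sub-directory per split before saving
for split_name in squad:
    os.makedirs(os.path.join("squad", split_name), exist_ok=True)

squad.save_to_disk("squad")
```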
I'll open a PR soon doing that and linking this issue.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1919/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1919/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5395 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5395/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5395/comments | https://api.github.com/repos/huggingface/datasets/issues/5395/events | https://github.com/huggingface/datasets/pull/5395 | 1,513,997,335 | PR_kwDODunzps5GXLUl | 5,395 | Temporarily pin pydantic test dependency | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012220 / 0.011353 (0.000867) | 0.005943 / 0.011008 (-0.005065) | 0.128223 / 0.038508 (0.089715) | 0.037352 / 0.023109 (0.014242) | 0.397143 / 0.275898 (0.121245) | 0.483935 / 0.323480 (0.160455) | 0.010279 / 0.007986 (0.002293) | 0.004842 / 0.004328 (0.000513) | 0.101403 / 0.004250 (0.097153) | 0.042935 / 0.037052 (0.005883) | 0.421642 / 0.258489 (0.163153) | 0.456328 / 0.293841 (0.162487) | 0.065639 / 0.128546 (-0.062907) | 0.019820 / 0.075646 (-0.055826) | 0.426090 / 0.419271 (0.006818) | 0.069583 / 0.043533 (0.026051) | 0.402662 / 0.255139 (0.147523) | 0.428826 / 0.283200 (0.145626) | 0.116760 / 0.141683 (-0.024923) | 1.806216 / 1.452155 (0.354061) | 1.852629 / 1.492716 (0.359913) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226555 / 0.018006 (0.208548) | 0.584693 / 0.000490 (0.584203) | 0.008612 / 0.000200 (0.008412) | 0.000205 / 0.000054 (0.000150) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028393 / 0.037411 (-0.009018) | 0.123355 / 0.014526 (0.108829) | 0.134423 / 0.176557 (-0.042133) | 0.188536 / 0.737135 (-0.548600) | 0.141595 / 0.296338 (-0.154743) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.589359 / 0.215209 (0.374150) | 5.974655 / 2.077655 (3.897001) | 2.465580 / 1.504120 (0.961460) | 2.007618 / 1.541195 (0.466424) | 2.078788 / 1.468490 
(0.610298) | 1.216646 / 4.584777 (-3.368131) | 5.217516 / 3.745712 (1.471804) | 3.107188 / 5.269862 (-2.162674) | 2.251641 / 4.565676 (-2.314036) | 0.138640 / 0.424275 (-0.285635) | 0.015046 / 0.007607 (0.007439) | 0.780092 / 0.226044 (0.554048) | 7.749564 / 2.268929 (5.480635) | 3.080708 / 55.444624 (-52.363917) | 2.393897 / 6.876477 (-4.482579) | 2.387738 / 2.142072 (0.245665) | 1.458844 / 4.805227 (-3.346384) | 0.252476 / 6.500664 (-6.248188) | 0.076594 / 0.075469 (0.001125) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.540868 / 1.841788 (-0.300919) | 17.295684 / 8.074308 (9.221376) | 19.669300 / 10.191392 (9.477908) | 0.250315 / 0.680424 (-0.430109) | 0.045068 / 0.534201 (-0.489133) | 0.538840 / 0.579283 (-0.040443) | 0.584443 / 0.434364 (0.150079) | 0.614476 / 0.540337 (0.074138) | 0.729928 / 1.386936 (-0.657008) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009218 / 0.011353 (-0.002135) | 0.006261 / 0.011008 (-0.004747) | 0.125541 / 0.038508 (0.087033) | 0.034405 / 0.023109 (0.011296) | 0.468381 / 0.275898 (0.192483) | 0.503336 / 0.323480 (0.179856) | 0.006839 / 0.007986 (-0.001146) | 0.004724 / 0.004328 (0.000396) | 0.097875 / 0.004250 (0.093625) | 0.051278 / 0.037052 (0.014225) | 0.473323 / 0.258489 (0.214834) | 0.537392 / 0.293841 (0.243551) | 0.055588 / 0.128546 (-0.072958) | 0.021041 / 0.075646 (-0.054605) | 0.416952 / 0.419271 (-0.002320) | 0.070128 / 0.043533 (0.026595) | 0.465224 / 0.255139 (0.210085) | 0.504678 / 0.283200 (0.221478) | 0.112504 / 0.141683 (-0.029179) | 1.865865 / 1.452155 (0.413710) | 1.988296 / 1.492716 (0.495580) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.314170 / 0.018006 (0.296164) | 0.526726 / 0.000490 (0.526236) | 0.018691 / 0.000200 (0.018491) | 0.000128 / 0.000054 (0.000073) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033772 / 0.037411 (-0.003639) | 0.124796 / 0.014526 (0.110270) | 0.134700 / 0.176557 (-0.041856) | 0.190595 / 0.737135 (-0.546541) | 0.143205 / 0.296338 (-0.153133) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.656708 / 0.215209 (0.441499) | 6.470503 / 2.077655 (4.392848) | 2.866430 / 1.504120 (1.362310) | 2.506846 / 1.541195 (0.965651) | 2.548669 / 1.468490 (1.080179) | 1.226695 / 4.584777 (-3.358082) | 5.117866 / 3.745712 (1.372153) | 3.032822 / 5.269862 (-2.237040) | 1.999152 / 4.565676 (-2.566524) | 0.142974 / 0.424275 (-0.281301) | 0.015011 / 0.007607 (0.007404) | 0.799729 / 0.226044 (0.573684) | 8.286313 / 2.268929 (6.017385) | 3.636482 / 55.444624 (-51.808142) | 2.888038 / 6.876477 (-3.988439) | 2.924982 / 2.142072 (0.782910) | 1.471996 / 4.805227 (-3.333231) | 0.257119 / 6.500664 (-6.243545) | 0.077294 / 0.075469 (0.001825) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.608290 / 1.841788 (-0.233497) | 17.599119 / 8.074308 (9.524811) | 18.917086 / 10.191392 (8.725694) | 0.236237 / 0.680424 (-0.444187) | 0.026061 / 0.534201 (-0.508140) | 0.527359 / 0.579283 (-0.051925) | 0.589176 / 0.434364 (0.154812) | 0.602310 / 0.540337 (0.061973) | 0.726756 / 1.386936 (-0.660180) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png \"CML watermark\")\n",
"Issue reported to `pydantic`: \r\n- https://github.com/pydantic/pydantic/issues/4885\r\n\r\nFixing PR at `pydantic`:\r\n- https://github.com/pydantic/pydantic/pull/4886"
] | "2022-12-29T19:34:19Z" | "2022-12-30T06:36:57Z" | "2022-12-29T21:00:26Z" | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5395.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5395",
"merged_at": "2022-12-29T21:00:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5395.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5395"
} | Temporarily pin `pydantic` until a permanent solution is found.
Fix #5394. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5395/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5395/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6203 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6203/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6203/comments | https://api.github.com/repos/huggingface/datasets/issues/6203/events | https://github.com/huggingface/datasets/issues/6203 | 1,877,491,602 | I_kwDODunzps5v6D-S | 6,203 | Support loading from a DVC remote repository | {
"avatar_url": "https://avatars.githubusercontent.com/u/16692099?v=4",
"events_url": "https://api.github.com/users/bilelomrani1/events{/privacy}",
"followers_url": "https://api.github.com/users/bilelomrani1/followers",
"following_url": "https://api.github.com/users/bilelomrani1/following{/other_user}",
"gists_url": "https://api.github.com/users/bilelomrani1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bilelomrani1",
"id": 16692099,
"login": "bilelomrani1",
"node_id": "MDQ6VXNlcjE2NjkyMDk5",
"organizations_url": "https://api.github.com/users/bilelomrani1/orgs",
"received_events_url": "https://api.github.com/users/bilelomrani1/received_events",
"repos_url": "https://api.github.com/users/bilelomrani1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bilelomrani1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bilelomrani1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bilelomrani1"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"(cross-posting from the linked DVC issue)\r\n\r\nI think this should already work out of the box with the current `datasets` and `dvc.api` releases by passing the correct `storage_options` into the datasets calls. `storage_options` is essentially just the kwargs dict that gets passed to the fsspec fs constructor.\r\n\r\nThe main thing to note here is that the fsspec DVCFileSystem URL should be `dvc://folder/file.json` (i.e. this should be the DVCFileSystem path that is relative to the DVC repo root). You cannot use a URL like `https://gitlab.com/user/repo/folder/file.json`.\r\n\r\nI think something like this should work for you (in a venv where both DVC and datasets are installed):\r\n```python\r\nimport datasets\r\n\r\n# load a dataset from Git/DVC repository where Git repo is located at https://gitlab.com/user/repo.git\r\n# and path to dataset (relative to git/dvc repo root) is 'folder/file.json'\r\ndatasets.load_from_disk(\r\n \"dvc://folder/file.json\",\r\n storage_options={\"url\": \"https://gitlab.com/user/repo.git\"},\r\n)\r\n```\r\n\r\nbasically the `dvc://` is what tells fsspec to create a `DVCFileSystem` and it will construct it like\r\n```python\r\nfs = DVCFileSystem(**storage_options)\r\n```\r\n\r\nThen the subsequent calls use the rest of the `dvc://...` URL like \r\n```python\r\nfs.exists(\"folder/file.json\")\r\n```",
"Hi @pmrowla Thank you for your help, that's very helpful, I was indeed using `fsspec` incorrectly here. There is still an issue with `datasets`:\r\n\r\n```python\r\nimport datasets\r\ndataset = datasets.load_dataset(\"json\", data_files=\"dvc://folder/file.jsonl\", storage_options={\"url\": \"https://gitlab.com/repo/folder/\"})\r\n```\r\n\r\nresults in the following exception:\r\n\r\n```\r\nTraceback (most recent call last): \r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/scmrepo/fs.py\", line 217, in info\r\n ret = self.trie.info(key)\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/scmrepo/git/objects.py\", line 141, in info\r\n obj = self.trie[key]\r\n ~~~~~~~~~^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/pygtrie.py\", line 937, in __getitem__\r\n node, _ = self._get_node(key_or_slice)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/pygtrie.py\", line 630, in _get_node\r\n raise KeyError(key)\r\nKeyError: ('dvc:', 'datasets', 'spider', 'train.jsonl')\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/datasets/load.py\", line 2129, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/datasets/load.py\", line 1815, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n ^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/datasets/load.py\", line 1430, in dataset_module_factory\r\n ).get_module()\r\n ^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/datasets/load.py\", line 958, in get_module\r\n data_files = DataFilesDict.from_patterns(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/datasets/data_files.py\", line 674, in from_patterns\r\n DataFilesList.from_patterns(\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/datasets/data_files.py\", line 589, in from_patterns\r\n origin_metadata = _get_origin_metadata(data_files, download_config=download_config)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/datasets/data_files.py\", line 504, in _get_origin_metadata\r\n return thread_map(\r\n ^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/tqdm/contrib/concurrent.py\", line 69, in thread_map\r\n return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File 
\"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/tqdm/contrib/concurrent.py\", line 51, in _executor_map\r\n return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/.pyenv/versions/3.11.4/lib/python3.11/concurrent/futures/_base.py\", line 619, in result_iterator\r\n yield _result_or_cancel(fs.pop())\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/.pyenv/versions/3.11.4/lib/python3.11/concurrent/futures/_base.py\", line 317, in _result_or_cancel\r\n return fut.result(timeout)\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/.pyenv/versions/3.11.4/lib/python3.11/concurrent/futures/_base.py\", line 456, in result\r\n return self.__get_result()\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/.pyenv/versions/3.11.4/lib/python3.11/concurrent/futures/_base.py\", line 401, in __get_result\r\n raise self._exception\r\n File \"/Users/bilelomrani/.pyenv/versions/3.11.4/lib/python3.11/concurrent/futures/thread.py\", line 58, in run\r\n result = self.fn(*self.args, **self.kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/datasets/data_files.py\", line 491, in _get_single_origin_metadata\r\n info = fs.info(data_file)\r\n ^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/dvc/fs/dvc.py\", line 357, in info\r\n return self._info(key, path, ignore_subrepos=ignore_subrepos)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/dvc/fs/dvc.py\", line 377, in _info\r\n fs_info = fs.info(fs_path)\r\n ^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/dvc_objects/fs/base.py\", line 501, in info\r\n return self.fs.info(path, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/bilelomrani/Documents/ILLUIN.nosync/instructions-finetuning/.venv/lib/python3.11/site-packages/scmrepo/fs.py\", line 221, in info\r\n raise FileNotFoundError(errno.ENOENT, os.strerror(errno.ENOENT), path)\r\nFileNotFoundError: [Errno 2] No such file or directory: '/dvc:/folder/file.jsonl'\r\n```\r\n\r\nSomehow the URL gets turned into `/dvc:/folder/file.jsonl` inside `datasets`. Otherwise I can confirm that using `fsspec` properly with DVC works as expected.\r\n",
"For the record, there was a `dvc.api.DVCFileSystem` bug which is fixed in DVC `main` and will be available in the next DVC release.\r\n\r\nTo use DVC with `datasets` you just need to pass the Git/DVC repo `url` in `storage_options` as discussed above.\r\n\r\n(note that this requires having both `datasets` and `dvc` installed in your python environment)\r\n```python\r\n>>> from datasets import load_dataset\r\n>>> load_dataset(\r\n... \"json\",\r\n... data_files=\"dvc://eval/metrics.json\",\r\n... storage_options={\"url\": \"https://github.com/iterative/example-get-started.git\"},\r\n... )\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['avg_prec', 'roc_auc'],\r\n num_rows: 1\r\n })\r\n})\r\n```\r\n\r\nAny additional `DVCFileSystem` args can be passed in the same way, so to get a specific branch/tag/commit from the DVC repo you just need to specify the `rev` in `storage_options` like\r\n```\r\nstorage_options={\"url\": \"https://github.com/iterative/example-get-started.git\", \"rev\": \"main\"}\r\n```\r\n\r\nI think this issue can probably be closed now.",
"Thank you for your help, closing."
] | "2023-09-01T14:04:52Z" | "2023-09-15T15:11:27Z" | "2023-09-15T15:11:27Z" | NONE | null | null | null | ### Feature request
Add support for loading a file from a DVC repository tracked remotely on an SCM.
### Motivation
DVC is a popular version control system for datasets. The files are stored on a remote object storage platform, but they are tracked using Git. Integration with DVC is possible through the `DVCFileSystem`.
I have a Gitlab repository where multiple files are tracked using DVC and stored in a GCP bucket. I would like to be able to load these files with `datasets` directly from a URL. My goal is to write generic code that abstracts the storage layer, such that my users only have to pass in an `fsspec`-compliant URL and the corresponding files will be loaded.
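For illustration, here is a minimal sketch of the kind of storage-agnostic helper I have in mind; the `dvc://` handling and the repository URL in the example call are assumptions, since that is exactly the part that does not work yet:

```python
from typing import Optional

import datasets


def load_from_url(url: str, storage_options: Optional[dict] = None):
    # The caller only supplies an fsspec-compliant URL; which backend is used
    # (GCS, S3, DVC, ...) is resolved by fsspec from the URL scheme, not here.
    return datasets.load_dataset(
        "json",
        data_files=url,
        storage_options=storage_options,
    )


# Hypothetical call for a DVC-tracked file in a Gitlab repository:
# load_from_url("dvc://my-folder/my-file.json", {"url": "https://gitlab.com/group/my-repo.git"})
```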
### Your contribution
I managed to instantiate a `DVCFileSystem` pointing to a Gitlab repo from an `fsspec` chained URL in [this pull request](https://github.com/iterative/dvc/pull/9903) to DVC.
```python
from fsspec.core import url_to_fs
fs, _ = url_to_fs("dvc::https://gitlab.com/repository/group/my-repo")
```
From here I'm not sure how to continue: it seems that `datasets` expects the URL to be fully qualified, like `dvc::https://gitlab.com/repository/group/my-repo/my-folder/my-file.json`, but this fails because `DVCFileSystem` expects the URL to point to the root of an SCM repo. Is there a way to make this work with `datasets`? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6203/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6203/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2288 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2288/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2288/comments | https://api.github.com/repos/huggingface/datasets/issues/2288/events | https://github.com/huggingface/datasets/issues/2288 | 871,111,235 | MDU6SXNzdWU4NzExMTEyMzU= | 2,288 | Load_dataset for local CSV files | {
"avatar_url": "https://avatars.githubusercontent.com/u/17052700?v=4",
"events_url": "https://api.github.com/users/sstojanoska/events{/privacy}",
"followers_url": "https://api.github.com/users/sstojanoska/followers",
"following_url": "https://api.github.com/users/sstojanoska/following{/other_user}",
"gists_url": "https://api.github.com/users/sstojanoska/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sstojanoska",
"id": 17052700,
"login": "sstojanoska",
"node_id": "MDQ6VXNlcjE3MDUyNzAw",
"organizations_url": "https://api.github.com/users/sstojanoska/orgs",
"received_events_url": "https://api.github.com/users/sstojanoska/received_events",
"repos_url": "https://api.github.com/users/sstojanoska/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sstojanoska/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sstojanoska/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sstojanoska"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi,\r\n\r\nthis is not a standard CSV file (requires additional preprocessing) so I wouldn't label this as s bug. You could parse the examples with the regex module or the string API to extract the data, but the following approach is probably the easiest (once you load the data):\r\n```python\r\nimport ast\r\n# load the dataset and copy the features\r\ndef process(ex):\r\n return {\"tokens\": ast.literal_eval(ex[\"tokens\"]), \"labels\": ast.literal_eval(ex[\"labels\"])}\r\ndataset = dataset.map(process, features=new_features)\r\n```\r\n",
"Hi,\r\n\r\nThanks for the reply.\r\nI have already used ```ast.literal_eval``` to evaluate the string into list, but I was getting another error:\r\n```\r\nArrowInvalid: Could not convert X with type str: tried to convert to int\r\n```\r\nWhy this happens ? Should labels be mapped to their ids and use int instead of str ?",
"Yes, just map the labels to their ids."
] | "2021-04-29T15:01:10Z" | "2021-06-15T13:49:26Z" | "2021-06-15T13:49:26Z" | NONE | null | null | null | The method load_dataset fails to correctly load a dataset from csv.
Moreover, I am working on a token-classification task (POS tagging), where each row in my CSV contains two columns, each holding a list of strings.
row example:
```
tokens | labels
['I' , 'am', 'John'] | ['PRON', 'AUX', 'PROPN' ]
```
The method loads each list as a string (e.g. `"['I' , 'am', 'John']"`), as the snippet below illustrates.
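A minimal reproduction of what I am seeing (the file name is hypothetical):

```python
from datasets import load_dataset

# hypothetical local CSV with the layout shown above
dataset = load_dataset("csv", data_files="train.csv")["train"]

print(dataset.features["tokens"])  # Value(dtype='string', ...) instead of a Sequence
print(dataset[0]["tokens"])        # "['I' , 'am', 'John']" -> a str, not a list
```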
To solve this issue, I copied the dataset's `Features`, created `Sequence` types (instead of `Value`) and tried to cast the feature types:
```
new_features['tokens'] = Sequence(feature=Value(dtype='string', id=None))
new_features['labels'] = Sequence(feature=ClassLabel(num_classes=len(tag2idx), names=list(unique_tags)))
dataset = dataset.cast(new_features)
```
but I got the following error
```
ArrowNotImplementedError: Unsupported cast from string to list using function cast_list
```
Moreover, I tried to set the `features` parameter of the `load_dataset` method to my `new_features`, but this fails as well (roughly as sketched below).
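For reference, that attempt looked roughly like this (simplified, file name hypothetical):

```python
# continuing from the new_features defined above
dataset = load_dataset("csv", data_files="train.csv", features=new_features)
# -> also fails
```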
How can this be solved? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2288/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2288/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2556 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2556/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2556/comments | https://api.github.com/repos/huggingface/datasets/issues/2556/events | https://github.com/huggingface/datasets/issues/2556 | 931,595,872 | MDU6SXNzdWU5MzE1OTU4NzI= | 2,556 | Better DuplicateKeysError error to help the user debug the issue | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/20517962?v=4",
"events_url": "https://api.github.com/users/VijayKalmath/events{/privacy}",
"followers_url": "https://api.github.com/users/VijayKalmath/followers",
"following_url": "https://api.github.com/users/VijayKalmath/following{/other_user}",
"gists_url": "https://api.github.com/users/VijayKalmath/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/VijayKalmath",
"id": 20517962,
"login": "VijayKalmath",
"node_id": "MDQ6VXNlcjIwNTE3OTYy",
"organizations_url": "https://api.github.com/users/VijayKalmath/orgs",
"received_events_url": "https://api.github.com/users/VijayKalmath/received_events",
"repos_url": "https://api.github.com/users/VijayKalmath/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/VijayKalmath/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VijayKalmath/subscriptions",
"type": "User",
"url": "https://api.github.com/users/VijayKalmath"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/20517962?v=4",
"events_url": "https://api.github.com/users/VijayKalmath/events{/privacy}",
"followers_url": "https://api.github.com/users/VijayKalmath/followers",
"following_url": "https://api.github.com/users/VijayKalmath/following{/other_user}",
"gists_url": "https://api.github.com/users/VijayKalmath/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/VijayKalmath",
"id": 20517962,
"login": "VijayKalmath",
"node_id": "MDQ6VXNlcjIwNTE3OTYy",
"organizations_url": "https://api.github.com/users/VijayKalmath/orgs",
"received_events_url": "https://api.github.com/users/VijayKalmath/received_events",
"repos_url": "https://api.github.com/users/VijayKalmath/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/VijayKalmath/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VijayKalmath/subscriptions",
"type": "User",
"url": "https://api.github.com/users/VijayKalmath"
}
] | null | [
"excuse me, my `datasets` version is `2.2.2`, but I also just see the error info like \r\n```\r\nDuplicatedKeysError: FAILURE TO GENERATE DATASET !\r\nFound duplicate Key: 0\r\nKeys should be unique and deterministic in nature\r\n```",
"Hi ! for which dataset do you have this error ?\r\n\r\nAlso note that this issue is just about improving the error message, which is not very friendly x)",
"@lhoestq I would like to take a hit at improving the error message. Will open a draft PR and will reach out to you for review\r\n",
"> DuplicateKeysError: both 42th and 1337th examples have the same keys `48`.\r\n\r\n@lhoestq when you mention 42th and 1337th in the above case , are these values the examples' \"id\" or are they the examples' index ? ",
"Hi ! Thanks @VijayKalmath :)\r\n\r\nIn the general case, examples don't have an \"id\" field, so I think it should correspond to the index",
"@lhoestq , I have opened a draft PR for this Issue. \r\n\r\nI wanted to check with you if there is a way to get `<path/to/the/dataset/script>` currently or do I need to add extra code to find that. \r\n\r\nIf I need to find the script , I can assume that the generator function will always be in `datasets/{dataset_name}/{dataset_name}.py`. ",
"Thanks !\r\n\r\n> I wanted to check with you if there is a way to get <path/to/the/dataset/script> currently or do I need to add extra code to find that.\r\n\r\nYou don't have access to this info inside the ArrowWriter unfortunately. This info is available in builder.py in the DatasetBuilder code that uses the ArrowWriter though, maybe a try-catch there can do the job"
] | "2021-06-28T13:50:57Z" | "2022-06-28T09:26:04Z" | "2022-06-28T09:26:04Z" | MEMBER | null | null | null | As mentioned in https://github.com/huggingface/datasets/issues/2552 it would be nice to improve the error message when a dataset fails to build because there are duplicate example keys.
The current one is
```python
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 48
Keys should be unique and deterministic in nature
```
and we could have something that guides the user in debugging the issue:
```python
DuplicateKeysError: both 42th and 1337th examples have the same keys `48`.
Please fix the dataset script at <path/to/the/dataset/script>
``` | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2556/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2556/timeline | null | completed | false |