Sentence Fragment Dataset
A labeled dataset for sentence-fragment text classification, derived from aclImdb_v1 and containing both positive and negative comments.
The data is stored as encoded .h5 (HDF5) files.
Data Structure
Each split has the following structure:
{
"label": [...],
"input_ids": [...],
"attention_mask": [...]
}
label - int (0: sentence fragment, 1: whole sentence)
input_ids - torch.tensor (the tokenized form of each sentence fragment, as PyTorch tensors)
attention_mask - torch.tensor (the attention mask for each sentence fragment, as PyTorch tensors)
Note: to see the raw text, go to my other dataset. That dataset, however, will not load the input IDs and attention masks correctly into PyTorch tensors.
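For reference, a record with these fields could be produced with a Hugging Face tokenizer roughly as in the sketch below. This is an illustration only: the tokenizer name (bert-base-uncased) and the example text are assumptions, not necessarily what was used to build this dataset.

from transformers import AutoTokenizer

# Assumed tokenizer, for illustration only; the actual tokenizer used for this
# dataset is not stated in the card.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

encoded = tokenizer("walking down the street", return_tensors="pt")
example = {
    "label": 0,                                      # 0: sentence fragment, 1: whole sentence
    "input_ids": encoded["input_ids"][0],            # 1-D tensor of token IDs
    "attention_mask": encoded["attention_mask"][0],  # 1-D tensor of 1s (no padding yet)
}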
Loading the Dataset
git clone https://huggingface.co/datasets/Inoob/SentenceFragmentsDataset
If that fails, try
git-lfs clone https://huggingface.co/datasets/Inoob/SentenceFragmentsDataset
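If git is not convenient, the files can also be fetched with the huggingface_hub library. This is a sketch, not part of the original instructions; it assumes train.h5 and test.h5 sit at the root of the repository, as the loading code below implies.

from huggingface_hub import hf_hub_download

# Download each split file from the dataset repo (cached locally by the hub library)
train_path = hf_hub_download(repo_id="Inoob/SentenceFragmentsDataset",
                             filename="train.h5", repo_type="dataset")
test_path = hf_hub_download(repo_id="Inoob/SentenceFragmentsDataset",
                            filename="test.h5", repo_type="dataset")

The returned paths can then be passed to h5py.File in place of the ./SentenceFragmentsDataset/... paths used below.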
Then, load it using h5py:
train split:
import h5py
import torch

# Open the training split and read each field into Python/PyTorch objects
with h5py.File('./SentenceFragmentsDataset/train.h5', 'r') as hf:
    label = list(hf['label'])                                               # int labels
    input_ids = [torch.tensor(data) for data in hf['input_ids']]            # token ID tensors
    attention_mask = [torch.tensor(data) for data in hf['attention_mask']]  # attention mask tensors

data_train = {"label": label, "input_ids": input_ids, "attention_mask": attention_mask}
test split:
import h5py
import torch

# Open the test split and read each field into Python/PyTorch objects
with h5py.File('./SentenceFragmentsDataset/test.h5', 'r') as hf:
    label = list(hf['label'])                                               # int labels
    input_ids = [torch.tensor(data) for data in hf['input_ids']]            # token ID tensors
    attention_mask = [torch.tensor(data) for data in hf['attention_mask']]  # attention mask tensors

data_test = {"label": label, "input_ids": input_ids, "attention_mask": attention_mask}
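Once both splits are loaded, one way to feed them to a model is to wrap the dictionaries in a small PyTorch Dataset and pad each batch in a collate function. This is a sketch, not part of the original instructions; it assumes the tokenizer's pad token ID is 0.

import torch
from torch.utils.data import Dataset, DataLoader
from torch.nn.utils.rnn import pad_sequence

class FragmentDataset(Dataset):
    # Wraps the dict produced above (data_train or data_test)
    def __init__(self, data):
        self.data = data

    def __len__(self):
        return len(self.data["label"])

    def __getitem__(self, idx):
        return (self.data["input_ids"][idx],
                self.data["attention_mask"][idx],
                int(self.data["label"][idx]))

def collate(batch):
    # Pad variable-length sequences to the longest example in the batch;
    # padding_value=0 assumes the pad token ID is 0.
    input_ids, attention_mask, labels = zip(*batch)
    return {
        "input_ids": pad_sequence(input_ids, batch_first=True, padding_value=0),
        "attention_mask": pad_sequence(attention_mask, batch_first=True, padding_value=0),
        "labels": torch.tensor(labels),
    }

train_loader = DataLoader(FragmentDataset(data_train), batch_size=32, shuffle=True, collate_fn=collate)
test_loader = DataLoader(FragmentDataset(data_test), batch_size=32, shuffle=False, collate_fn=collate)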