Dataset Preview

The full dataset viewer is not available; only a preview of the rows is shown below.
Error code: DatasetGenerationCastError

The dataset generation failed because of a cast error: all the data files must have the same columns, but at some point there are 2 new columns ({'1', '2'}) and 3 missing columns ({'class', 'X', 'Y'}). The builder could not cast a table with schema {1: int64, 2: int64} to the expected schema {'X': float64, 'Y': float64, 'class': int64} because the column names don't match.

This happened while the csv dataset builder was generating data using

hf://datasets/Jerry-Master/lung-tumour-study/train/csv/(H&E), VH22B003751A003001 (x=17258.0, y=28669.0, w=1024.0, h=1024.0).class.csv (at revision 4416eb9cb366e59f45dde0ccfc21dc4e7ca9d9e4)

Please either edit the data files to have matching columns, or separate them into different configurations (see the docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).
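If you want to work with the CSV files directly despite the viewer error, a minimal sketch like the one below can help identify which files follow which column schema, so they can be loaded or declared as separate configurations. It assumes the repository has already been downloaded locally (for example with huggingface_hub.snapshot_download) and uses the train/csv path taken from the error message; the local directory name is an assumption.

# Minimal sketch: group the per-patch CSV files by their column signature.
# Assumes a local copy of the repository, e.g. obtained with
# huggingface_hub.snapshot_download("Jerry-Master/lung-tumour-study",
#                                   repo_type="dataset", local_dir="lung-tumour-study")
from collections import defaultdict
from pathlib import Path

import pandas as pd

csv_dir = Path("lung-tumour-study/train/csv")  # path taken from the error message above
groups = defaultdict(list)

for csv_path in sorted(csv_dir.glob("*.csv")):
    columns = tuple(pd.read_csv(csv_path, nrows=0).columns)  # read the header only
    groups[columns].append(csv_path.name)

for columns, files in groups.items():
    print(f"{len(files):4d} files with columns {columns}")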


X (float64)    Y (float64)     class (int64)
8.159036       244.171084      2
6.661491       372.403727      2
8.680851       396.768997      2
7.707124       802.546174      2
8.491979       892.559715      2
5.944563       985.285714      2
14.468223      75.053178       2
19.965702      446.685959      2
17.405817      846.963989      2
20.161342      117.116613      2
...            ...             ...

End of preview.

Combining graph neural networks and computer vision methods for cell nuclei classification in lung tissue

This is the dataset of the article in the title. It contains 85 patches of 1024x1024 pixels from H&E-stained WSIs of 9 different patients. There are two main classes: tumoural (2) and non-tumoural (1). Due to the difficulty of the problem, 153 cells were labelled as uncertain. For technical reasons, we decided to remove them from the train and validation sets, and we carefully chose the test set so that it contains no uncertain cells. In total there are 21255 cells in the train set, 4114 in the validation set and 5533 in the test set. We manually verified that no patient appears in more than one split, so there is no data leakage between splits. This repo is just a copy of https://zenodo.org/doi/10.5281/zenodo.8368122.
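
As an illustration of how the per-cell labels can be tallied, here is a minimal sketch that counts tumoural and non-tumoural cells across the CSV files of one split. It assumes a local copy of the repository and that the relevant files expose the X, Y and class columns shown in the preview above; adjust the path to wherever you downloaded the data.

# Minimal sketch: count cells per class in one split, assuming the split's
# CSV files with an X/Y/class schema live under train/csv (see the preview above).
from pathlib import Path

import pandas as pd

CLASS_NAMES = {1: "non-tumoural", 2: "tumoural"}
csv_dir = Path("lung-tumour-study/train/csv")  # adjust to your local copy

counts = {name: 0 for name in CLASS_NAMES.values()}
for csv_path in sorted(csv_dir.glob("*.csv")):
    df = pd.read_csv(csv_path)
    if {"X", "Y", "class"}.issubset(df.columns):  # skip files with other schemas
        for label, n in df["class"].value_counts().items():
            if label in CLASS_NAMES:
                counts[CLASS_NAMES[label]] += int(n)

print(counts)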

Structure

The data is provided in several ways. The orig folder contains the images without any annotation. The overlay folder contains the same images with the cells overlaid on top for visualization purposes: healthy cells are shown in red and tumoural cells in green. Annotations were made with QuPath; the raw GeoJSON files exported from the application are in raw_geojson. Bear in mind that they may contain duplicated cells and uncertain cells. We are releasing them together with the scripts in the scripts folder so that any interested researcher can load the annotations back into QuPath and review the labels. If you, as an expert, believe we have incorrectly labelled some cells, please feel free to contact us. The remaining folders (train, validation, test) contain the data ready to use, with the same structure as specified in the tumourkit package documentation: just move them into the data folder. Notice that you will need to move the orig folder too.
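
A minimal sketch of the folder shuffling described above. The source and destination paths are assumptions; check the tumourkit documentation for the exact layout your project expects.

# Minimal sketch: move the ready-to-use splits and the orig folder into a
# tumourkit-style data folder. The paths below are assumptions; check the
# tumourkit documentation for the exact layout your project expects.
import shutil
from pathlib import Path

src = Path("lung-tumour-study")   # local copy of this repository (assumed)
dst = Path("my-project/data")     # data folder of your tumourkit project (assumed)
dst.mkdir(parents=True, exist_ok=True)

for folder in ("train", "validation", "test", "orig"):
    shutil.move(str(src / folder), str(dst / folder))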

Any pred or hov folder is provided only as an example; they contain predictions from one of our models, and if you train your own models you should delete them. The npy folders contain 518x518 crops of the original images. You can train HoVer-Net with other crop shapes by modifying the code provided by the Tumourkit library.
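
For reference, here is a minimal sketch of cutting a 1024x1024 patch into 518x518 crops. The crop size comes from the paragraph above; the stride and the lack of padding are illustrative choices and not necessarily the procedure that produced the released npy folders.

# Minimal sketch: cut a 1024x1024 patch into 518x518 crops with numpy.
# The crop size comes from the description above; the stride is an
# illustrative choice, not necessarily what produced the npy folders.
import numpy as np

def make_crops(image: np.ndarray, size: int = 518, stride: int = 506) -> list[np.ndarray]:
    """Return all full-size crops obtained by sliding a size x size window."""
    crops = []
    h, w = image.shape[:2]
    for top in range(0, h - size + 1, stride):
        for left in range(0, w - size + 1, stride):
            crops.append(image[top:top + size, left:left + size])
    return crops

patch = np.zeros((1024, 1024, 3), dtype=np.uint8)  # stand-in for a real H&E patch
print(len(make_crops(patch)))  # 2x2 = 4 crops with this stride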

Citation

@article{PerezCano2024,
  author = {Jose Pérez-Cano and Irene Sansano Valero and David Anglada-Rotger and Oscar Pina and Philippe Salembier and Ferran Marques},
  title = {Combining graph neural networks and computer vision methods for cell nuclei classification in lung tissue},
  journal = {Heliyon},
  year = {2024},
  volume = {10},
  number = {7},
  doi = {10.1016/j.heliyon.2024.e28463},
}