parquet-converter committed on
Commit 5f6529f • 1 Parent(s): 08e0f81

Update parquet files

README.dataset.txt DELETED
@@ -1,16 +0,0 @@
- # undefined > raw-images_640by640
- https://public.roboflow.ai/object-detection/undefined
-
- Provided by undefined
- License: CC BY 4.0
-
- This project aims to build an efficient computer-vision model that detects different kinds of construction equipment on construction sites, starting with **three classes: excavators, trucks, and wheel loaders.**
-
- The **dataset is provided by [Mohamed Sabek](https://www.linkedin.com/in/mohammadsabek/)**, a Spring 2022 Master of Science graduate from Arizona State University in [Construction Management and Technology](https://graduate.engineering.asu.edu/construction-management/).
-
- The raw images (v1) contain:
- 1. 1,532 annotated examples of "excavators"
- 2. 1,269 annotated examples of "dump truck"
- 3. 1,080 annotated examples of "wheel loader"
-
- **Note:** versions 2 and 3 (v2 and v3) contain the raw images resized to 416 by 416 (stretch) and 640 by 640 (stretch), without any augmentations.
README.md DELETED
@@ -1,83 +0,0 @@
- ---
- task_categories:
- - object-detection
- tags:
- - roboflow
- - roboflow2huggingface
- - Manufacturing
- - Construction
- - Machinery
- ---
-
- <div align="center">
-   <img width="640" alt="keremberke/excavator-detector" src="https://huggingface.co/datasets/keremberke/excavator-detector/resolve/main/thumbnail.jpg">
- </div>
-
- ### Dataset Labels
-
- ```
- ['excavators', 'dump truck', 'wheel loader']
- ```
-
-
- ### Number of Images
-
- ```json
- {'test': 144, 'train': 2245, 'valid': 267}
- ```
-
-
- ### How to Use
-
- - Install [datasets](https://pypi.org/project/datasets/):
-
- ```bash
- pip install datasets
- ```
-
- - Load the dataset:
-
- ```python
- from datasets import load_dataset
-
- ds = load_dataset("keremberke/excavator-detector", name="full")
- example = ds['train'][0]
- ```
-
- ### Roboflow Dataset Page
- [https://universe.roboflow.com/mohamed-sabek-6zmr6/excavators-cwlh0/dataset/3](https://universe.roboflow.com/mohamed-sabek-6zmr6/excavators-cwlh0/dataset/3?ref=roboflow2huggingface)
-
- ### Citation
-
- ```
- @misc{ excavators-cwlh0_dataset,
-     title = { Excavators Dataset },
-     type = { Open Source Dataset },
-     author = { Mohamed Sabek },
-     howpublished = { \url{ https://universe.roboflow.com/mohamed-sabek-6zmr6/excavators-cwlh0 } },
-     url = { https://universe.roboflow.com/mohamed-sabek-6zmr6/excavators-cwlh0 },
-     journal = { Roboflow Universe },
-     publisher = { Roboflow },
-     year = { 2022 },
-     month = { nov },
-     note = { visited on 2023-01-16 },
- }
- ```
-
- ### License
- CC BY 4.0
-
- ### Dataset Summary
- This dataset was exported via roboflow.ai on April 4, 2022 at 8:56 AM GMT.
-
- It includes 2656 images.
- Excavators are annotated in COCO format.
-
- The following pre-processing was applied to each image:
- * Auto-orientation of pixel data (with EXIF-orientation stripping)
- * Resize to 640x640 (Stretch)
-
- No image augmentation techniques were applied.
-
-
-
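Since this commit swaps the script-based loader for auto-converted parquet shards, the `load_dataset` call from the deleted README should keep working against the new `full/` and `mini/` layouts. A minimal usage sketch, not part of the commit, that draws the COCO boxes of one training example; it assumes the schema declared in the deleted loading script below (a PIL-backed `image` column and an `objects` Sequence that arrives as a dict of parallel lists):

```python
from datasets import load_dataset
from PIL import ImageDraw

ds = load_dataset("keremberke/excavator-detector", name="full")
example = ds["train"][0]

# `objects` is a Sequence feature, so it is returned as a dict of parallel lists.
draw = ImageDraw.Draw(example["image"])
for bbox in example["objects"]["bbox"]:
    x, y, w, h = bbox  # COCO bbox format: top-left corner plus width/height
    draw.rectangle([x, y, x + w, y + h], outline="red", width=3)
example["image"].save("annotated.jpg")
```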
README.roboflow.txt DELETED
@@ -1,16 +0,0 @@
-
- Excavators - v3 raw-images_640by640
- ==============================
-
- This dataset was exported via roboflow.ai on April 4, 2022 at 8:56 AM GMT.
-
- It includes 2656 images.
- Excavators are annotated in COCO format.
-
- The following pre-processing was applied to each image:
- * Auto-orientation of pixel data (with EXIF-orientation stripping)
- * Resize to 640x640 (Stretch)
-
- No image augmentation techniques were applied.
-
-
excavator-detector.py DELETED
@@ -1,152 +0,0 @@
- import collections
- import json
- import os
-
- import datasets
-
-
- _HOMEPAGE = "https://universe.roboflow.com/mohamed-sabek-6zmr6/excavators-cwlh0/dataset/3"
- _LICENSE = "CC BY 4.0"
- _CITATION = """\
- @misc{ excavators-cwlh0_dataset,
-     title = { Excavators Dataset },
-     type = { Open Source Dataset },
-     author = { Mohamed Sabek },
-     howpublished = { \\url{ https://universe.roboflow.com/mohamed-sabek-6zmr6/excavators-cwlh0 } },
-     url = { https://universe.roboflow.com/mohamed-sabek-6zmr6/excavators-cwlh0 },
-     journal = { Roboflow Universe },
-     publisher = { Roboflow },
-     year = { 2022 },
-     month = { nov },
-     note = { visited on 2023-01-16 },
- }
- """
- _CATEGORIES = ['excavators', 'dump truck', 'wheel loader']
- _ANNOTATION_FILENAME = "_annotations.coco.json"
-
-
- class EXCAVATORDETECTORConfig(datasets.BuilderConfig):
-     """Builder Config for excavator-detector"""
-
-     def __init__(self, data_urls, **kwargs):
-         """
-         BuilderConfig for excavator-detector.
-
-         Args:
-             data_urls: `dict`, name to url to download the zip file from.
-             **kwargs: keyword arguments forwarded to super.
-         """
-         super(EXCAVATORDETECTORConfig, self).__init__(version=datasets.Version("1.0.0"), **kwargs)
-         self.data_urls = data_urls
-
-
- class EXCAVATORDETECTOR(datasets.GeneratorBasedBuilder):
-     """excavator-detector object detection dataset"""
-
-     VERSION = datasets.Version("1.0.0")
-     BUILDER_CONFIGS = [
-         EXCAVATORDETECTORConfig(
-             name="full",
-             description="Full version of excavator-detector dataset.",
-             data_urls={
-                 "train": "https://huggingface.co/datasets/keremberke/excavator-detector/resolve/main/data/train.zip",
-                 "validation": "https://huggingface.co/datasets/keremberke/excavator-detector/resolve/main/data/valid.zip",
-                 "test": "https://huggingface.co/datasets/keremberke/excavator-detector/resolve/main/data/test.zip",
-             },
-         ),
-         EXCAVATORDETECTORConfig(
-             name="mini",
-             description="Mini version of excavator-detector dataset.",
-             data_urls={
-                 "train": "https://huggingface.co/datasets/keremberke/excavator-detector/resolve/main/data/valid-mini.zip",
-                 "validation": "https://huggingface.co/datasets/keremberke/excavator-detector/resolve/main/data/valid-mini.zip",
-                 "test": "https://huggingface.co/datasets/keremberke/excavator-detector/resolve/main/data/valid-mini.zip",
-             },
-         )
-     ]
-
-     def _info(self):
-         features = datasets.Features(
-             {
-                 "image_id": datasets.Value("int64"),
-                 "image": datasets.Image(),
-                 "width": datasets.Value("int32"),
-                 "height": datasets.Value("int32"),
-                 "objects": datasets.Sequence(
-                     {
-                         "id": datasets.Value("int64"),
-                         "area": datasets.Value("int64"),
-                         "bbox": datasets.Sequence(datasets.Value("float32"), length=4),
-                         "category": datasets.ClassLabel(names=_CATEGORIES),
-                     }
-                 ),
-             }
-         )
-         return datasets.DatasetInfo(
-             features=features,
-             homepage=_HOMEPAGE,
-             citation=_CITATION,
-             license=_LICENSE,
-         )
-
-     def _split_generators(self, dl_manager):
-         data_files = dl_manager.download_and_extract(self.config.data_urls)
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={
-                     "folder_dir": data_files["train"],
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 gen_kwargs={
-                     "folder_dir": data_files["validation"],
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 gen_kwargs={
-                     "folder_dir": data_files["test"],
-                 },
-             ),
-         ]
-
-     def _generate_examples(self, folder_dir):
-         def process_annot(annot, category_id_to_category):
-             return {
-                 "id": annot["id"],
-                 "area": annot["area"],
-                 "bbox": annot["bbox"],
-                 "category": category_id_to_category[annot["category_id"]],
-             }
-
-         image_id_to_image = {}
-         idx = 0
-
-         annotation_filepath = os.path.join(folder_dir, _ANNOTATION_FILENAME)
-         with open(annotation_filepath, "r") as f:
-             annotations = json.load(f)
-         category_id_to_category = {category["id"]: category["name"] for category in annotations["categories"]}
-         image_id_to_annotations = collections.defaultdict(list)
-         for annot in annotations["annotations"]:
-             image_id_to_annotations[annot["image_id"]].append(annot)
-         filename_to_image = {image["file_name"]: image for image in annotations["images"]}
-
-         for filename in os.listdir(folder_dir):
-             filepath = os.path.join(folder_dir, filename)
-             if filename in filename_to_image:
-                 image = filename_to_image[filename]
-                 objects = [
-                     process_annot(annot, category_id_to_category) for annot in image_id_to_annotations[image["id"]]
-                 ]
-                 with open(filepath, "rb") as f:
-                     image_bytes = f.read()
-                 yield idx, {
-                     "image_id": image["id"],
-                     "image": {"path": filepath, "bytes": image_bytes},
-                     "width": image["width"],
-                     "height": image["height"],
-                     "objects": objects,
-                 }
-                 idx += 1
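With this loading script deleted, the converted shards can also be read without `datasets` at all. A minimal sketch, not part of the commit, assuming `huggingface_hub`'s `hf://` fsspec integration is installed and that the parquet conversion stores the image column as a struct with a `bytes` field (the Hub's usual encoding):

```python
import io

import pandas as pd
from PIL import Image

# Read one converted shard straight from the Hub (requires `huggingface_hub`
# for the hf:// filesystem and `pyarrow` for parquet support).
df = pd.read_parquet(
    "hf://datasets/keremberke/excavator-detector/full/excavator-detector-train.parquet"
)

row = df.iloc[0]
img = Image.open(io.BytesIO(row["image"]["bytes"]))  # image stored as encoded bytes
# Assumed layout: `objects` serialized as a struct of parallel lists.
print(img.size, len(row["objects"]["bbox"]))
```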
data/valid.zip → full/excavator-detector-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b6b92befa2c6c74e08dc60b2cdadb091375813aa656454d43d0d35adc98b55a3
- size 18899003
+ oid sha256:57fc2348e04878c1e9fb5822a67649eddaa36e8b4a52ab10f41aa77b530fd8f1
+ size 10300078
data/train.zip → full/excavator-detector-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:57d7d0d67bea2f3a16359e8d6c171a4e8b5948f34f9aff79cf047f150dfce01f
- size 163840131
+ oid sha256:332a63c1d4cf3eb5c15e30d8e0e126706d3c2849442a8ece87a8e63b376a2488
+ size 164190371
data/test.zip → full/excavator-detector-validation.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a533021ec10e6a5c044915fa21645c21d47be955eb04740d0ee7b8d602284462
- size 10272719
+ oid sha256:5dcf3b91464166265d59ef2ebb6dc5b78544ccf08e6c8f2ab4842cfb253a48fc
+ size 18936488
thumbnail.jpg → mini/excavator-detector-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:784b5c44cc393edda704efc11c391f23e2d6d8af02f728d5f62a4d13da0772bd
- size 167653
+ oid sha256:ed1b85a4d8368a9d8ac3a3c4f6284d1d531bc46fbd6f0ef8917154cadec933f2
+ size 166025
data/valid-mini.zip → mini/excavator-detector-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2545a0a9aa81822f2f0301800305f94a2d2d0b0467f2a7ed5f4aaf7f03644079
- size 160754
+ oid sha256:ed1b85a4d8368a9d8ac3a3c4f6284d1d531bc46fbd6f0ef8917154cadec933f2
+ size 166025
mini/excavator-detector-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ed1b85a4d8368a9d8ac3a3c4f6284d1d531bc46fbd6f0ef8917154cadec933f2
+ size 166025
split_name_to_num_samples.json DELETED
@@ -1 +0,0 @@
- {"test": 144, "train": 2245, "valid": 267}
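The RENAMED and ADDED entries above are git-LFS pointer files: the repository itself stores only the content's sha256 `oid` and byte `size`, and the actual parquet bytes are fetched on download. A sketch, not part of the commit, of resolving one shard to a local file with `huggingface_hub` (assumed installed):

```python
from huggingface_hub import hf_hub_download

# The Hub serves the content whose sha256 matches the pointer's oid,
# caching it locally and returning the cached path.
path = hf_hub_download(
    repo_id="keremberke/excavator-detector",
    filename="full/excavator-detector-train.parquet",
    repo_type="dataset",
)
print(path)  # the file's size should match the pointer's `size` (164190371)
```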