# Trash Classification Project

## Overview

This project focuses on trash classification using machine learning. It combines three public datasets totaling 8,895 images with 4,500 additional images scraped via the Google Images API, drawing on diverse sources.
## Datasets

### 1. Drinking Waste Classification

- Images: 4,832
- Organization: Directory-based sorting
- Classes: 4 recyclable categories
  - Aluminium Cans
  - Glass Bottles
  - PET (Plastic) Bottles
  - HDPE (Plastic) Milk Bottles
### 2. TACO (Trash Annotations in Context)

- Images: 1,530
- Environment: Diverse settings (woods, roads, beaches)
- Format: Raw images with annotation JSON
- Note: Requires category mapping for proper sorting (see the sketch after this list)
### 3. TrashNet

- Images: 2,533
- Organization: Directory-based sorting
- Classes: 6 categories
  - Cardboard
  - Glass
  - Metal
  - Paper
  - Plastic
  - Trash (miscellaneous)
### 4. Google Images API

- Images: 4,500
- Organization: Directory-based sorting
- Collection: Scraped via the Google Images API to expand the dataset
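
Since TACO ships COCO-style annotation JSON rather than class directories, its images need a category mapping before they can be sorted like the other datasets. The sketch below shows one way to do this; the file paths, target class names, and mapping entries are illustrative assumptions, not the project's actual mapping.

```python
# Hedged sketch: sort TACO images into class directories using its
# COCO-style annotations. Paths and the mapping are placeholders.
import json
import shutil
from pathlib import Path

# Map TACO category names to this project's classes (extend as needed).
CATEGORY_TO_CLASS = {
    "Aluminium foil": "Recyclable",
    "Cigarette": "Non-recyclable",
}

with open("TACO/data/annotations.json") as f:
    coco = json.load(f)

cat_names = {c["id"]: c["name"] for c in coco["categories"]}
img_files = {i["id"]: i["file_name"] for i in coco["images"]}

for ann in coco["annotations"]:
    target = CATEGORY_TO_CLASS.get(cat_names[ann["category_id"]])
    if target:
        # Copy the image into a class directory for directory-based sorting.
        # (An image with several annotations lands in its last matching class.)
        src = Path("TACO/data") / img_files[ann["image_id"]]
        dst = Path("sorted") / target / src.name
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy(src, dst)
```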
## Data Processing Pipeline

### Image Augmentation

We apply 14 different manipulations to expand the dataset:

| Transformation | Description |
|---|---|
| Grayscale | Convert to grayscale |
| Rotation | 90°, 180°, 270° rotations |
| Flipping | Horizontal and vertical flips |
| Noise | Add random noise |
| Blur | Apply Gaussian blur |
| Brightness | Brighten and darken |
| Color Effects | Invert colors, posterize, solarize |
| Equalization | Histogram equalization |
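
The sketch below shows one way to produce these 14 variants with Pillow. It approximates what `img_manipulation.py` does; the function name, variant names, and exact parameters (noise range, blur radius, brightness factors, posterize bits) are assumptions.

```python
# Sketch of the 14 augmentations using Pillow (parameters are illustrative).
import numpy as np
from PIL import Image, ImageEnhance, ImageFilter, ImageOps

def augment(img: Image.Image) -> dict:
    rgb = img.convert("RGB")
    # Additive uniform noise, clipped back to valid pixel range.
    arr = np.asarray(rgb, dtype=np.int16)
    noisy = np.clip(arr + np.random.randint(-25, 26, arr.shape), 0, 255)
    return {
        "grayscale": ImageOps.grayscale(rgb),
        "rot90": rgb.rotate(90, expand=True),
        "rot180": rgb.rotate(180),
        "rot270": rgb.rotate(270, expand=True),
        "hflip": rgb.transpose(Image.FLIP_LEFT_RIGHT),
        "vflip": rgb.transpose(Image.FLIP_TOP_BOTTOM),
        "noise": Image.fromarray(noisy.astype(np.uint8)),
        "blur": rgb.filter(ImageFilter.GaussianBlur(radius=2)),
        "bright": ImageEnhance.Brightness(rgb).enhance(1.5),
        "dark": ImageEnhance.Brightness(rgb).enhance(0.5),
        "invert": ImageOps.invert(rgb),
        "posterize": ImageOps.posterize(rgb, bits=3),
        "solarize": ImageOps.solarize(rgb, threshold=128),
        "equalize": ImageOps.equalize(rgb),
    }
```

Each source image thus yields 14 extra training images on top of the original.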
### Standardization
- All images are resized to 224×224 pixels
- Stored as NumPy arrays (.npy) or PyTorch tensors
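
As a rough sketch of this step (assuming `standardize.py` works along these lines; the helper name, `.jpg`-only glob, and output layout are assumptions), resizing and stacking into a single `.npy` file could look like:

```python
# Resize every image under src_dir to 224x224 and save one stacked .npy array.
import numpy as np
from pathlib import Path
from PIL import Image

def standardize(src_dir: str, out_file: str, size=(224, 224)):
    arrays = []
    for path in sorted(Path(src_dir).rglob("*.jpg")):
        img = Image.open(path).convert("RGB").resize(size, Image.BILINEAR)
        arrays.append(np.asarray(img, dtype=np.uint8))
    np.save(out_file, np.stack(arrays))  # resulting shape: (N, 224, 224, 3)
```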
## HuggingFace Dataset Upload Instructions

### Generate Image Variations

```bash
python img_manipulation.py
```

### Standardize Images

```bash
python standardize.py
```

### Upload to HuggingFace

```bash
# Install HuggingFace CLI
pip install huggingface_hub

# Upload files
python upload_files.py
```
Note: Modify the paths in `upload_files.py` to point to your data.
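
For reference, a minimal `upload_files.py` could look like the sketch below, which uses `HfApi.upload_folder` from `huggingface_hub`. The folder path and repo ID are placeholders, and the actual script may differ.

```python
# Hypothetical upload_files.py: push a local data directory to a dataset repo.
from huggingface_hub import HfApi

api = HfApi()  # picks up the token saved by `huggingface-cli login`
api.upload_folder(
    folder_path="data/standardized",                # local directory (placeholder)
    repo_id="your-username/trash-classification",   # target repo (placeholder)
    repo_type="dataset",
)
```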
## Model Architecture: ResNet
We implement a ResNet (Residual Network) architecture for our trash classification task, leveraging deep residual learning. ResNet represents a significant advance over traditional Convolutional Neural Networks (CNNs): it introduces skip connections that allow information to bypass layers. These skip connections address the vanishing gradient problem that plagued deep networks, where gradients become extremely small during backpropagation, preventing effective training of deeper layers.
Unlike conventional CNNs where performance degrades as network depth increases beyond a certain point, ResNets can be substantially deeper (50, 101, or even 152 layers) while maintaining or improving accuracy. The key innovation is the residual block structure, which learns residual mappings instead of direct mappings, making optimization easier. This allows the network to decide whether to use or skip certain layers during training, effectively creating an ensemble of networks with different depths.
For our trash classification task, this architecture provides superior feature extraction capabilities, capturing both fine-grained details and higher-level abstractions necessary for distinguishing between various waste materials.
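
To make the residual idea concrete, here is a minimal sketch of a bottleneck block in TinyGrad. It follows the standard ResNet bottleneck design rather than reproducing the project's exact code, and the class and parameter names are illustrative.

```python
# Sketch of a ResNet bottleneck residual block in TinyGrad.
from tinygrad import Tensor, nn

class Bottleneck:
    expansion = 4  # output channels = ch * expansion

    def __init__(self, in_ch: int, ch: int, stride: int = 1):
        # 1x1 reduce -> 3x3 conv -> 1x1 expand, each followed by batch norm.
        self.conv1 = nn.Conv2d(in_ch, ch, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(ch)
        self.conv2 = nn.Conv2d(ch, ch, 3, stride=stride, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(ch)
        self.conv3 = nn.Conv2d(ch, ch * self.expansion, 1, bias=False)
        self.bn3 = nn.BatchNorm2d(ch * self.expansion)
        # Projection shortcut when the shape changes, identity otherwise.
        self.downsample = []
        if stride != 1 or in_ch != ch * self.expansion:
            self.downsample = [
                nn.Conv2d(in_ch, ch * self.expansion, 1, stride=stride, bias=False),
                nn.BatchNorm2d(ch * self.expansion),
            ]

    def __call__(self, x: Tensor) -> Tensor:
        out = self.bn1(self.conv1(x)).relu()
        out = self.bn2(self.conv2(out)).relu()
        out = self.bn3(self.conv3(out))
        # Skip connection: add the (possibly projected) input back in.
        return (out + x.sequential(self.downsample)).relu()
```

The addition in the last line is the skip connection: if the stacked convolutions learn nothing useful, the block can fall back to (a projection of) its input, which is what makes very deep networks trainable.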
### Key Features

- Architecture: ResNet with Bottleneck blocks
- Implementation: Built using TinyGrad for efficient training
- Structure:
  - Initial 7×7 convolution with stride 2
  - Four residual layers with bottleneck blocks
  - Global average pooling
  - Fully connected layer for classification
- Residual Learning: Uses skip connections to address the vanishing gradient problem
- Configuration:
  - Input size: 224×224×3 (RGB images)
  - Output: 3 classes (Compostable, Non-recyclable, Recyclable)
### Training Process
- Optimizer: SGD with momentum (0.9)
- Learning Rate: 0.001
- Batch Size: Variable (configurable)
- Metrics: Accuracy, Precision, Recall, F1-score
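
A minimal TinyGrad training epoch with these hyperparameters might look like the following sketch; `model`, `X_train`, `Y_train`, and `batch_size` are assumed to be defined elsewhere, and the loss choice is an assumption.

```python
# One training epoch: SGD with momentum 0.9 and learning rate 0.001.
from tinygrad import Tensor
from tinygrad.nn.optim import SGD
from tinygrad.nn.state import get_parameters

opt = SGD(get_parameters(model), lr=0.001, momentum=0.9)

with Tensor.train():  # enable training mode (gradients, batch-norm stats)
    for i in range(0, len(X_train), batch_size):
        x = Tensor(X_train[i:i + batch_size])
        y = Tensor(Y_train[i:i + batch_size])
        opt.zero_grad()
        loss = model(x).sparse_categorical_crossentropy(y)
        loss.backward()
        opt.step()
```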
### Performance Evaluation
The model is evaluated on a held-out test set with comprehensive metrics to ensure robust classification across all waste categories.
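
One way to compute the listed metrics is with scikit-learn (an assumption; the project may implement them differently):

```python
# Compute accuracy, precision, recall, and F1 on the held-out test set.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# y_true / y_pred: integer class labels for the test set (0=Compostable,
# 1=Non-recyclable, 2=Recyclable), assumed to be produced elsewhere.
acc = accuracy_score(y_true, y_pred)
prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="macro")
print(f"accuracy={acc:.3f} precision={prec:.3f} recall={rec:.3f} f1={f1:.3f}")
```

Macro averaging weights all three classes equally, which guards against the model looking good simply by doing well on the most common class.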