
Trash Classification Project

Overview

This project focuses on trash classification using machine learning, leveraging four datasets with a total of 13,395 images from diverse sources.

Datasets

1. Drinking Waste Classification

  • Images: 4,832
  • Organization: Directory-based sorting
  • Classes: 4 recyclable categories
    • Aluminium Cans
    • Glass Bottles
    • PET (Plastic) Bottles
    • HDPE (Plastic) Milk Bottles

2. TACO (Trash Annotations in Context)

  • Images: 1,530
  • Environment: Diverse settings (woods, roads, beaches)
  • Format: Raw images with annotation JSON
  • Note: Requires category mapping for proper sorting
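
TACO ships COCO-style annotation JSON, so the mapping step might look like the sketch below. The file paths and the CATEGORY_MAP entries are illustrative assumptions, not the project's actual mapping:

    import json
    import shutil
    from pathlib import Path

    # Illustrative mapping from TACO category names to this project's 3 classes.
    CATEGORY_MAP = {
        "Clear plastic bottle": "Recyclable",
        "Aluminium foil": "Recyclable",
        "Food waste": "Compostable",
        "Cigarette": "Non-recyclable",
    }

    with open("annotations.json") as f:      # assumed path to TACO's annotation file
        coco = json.load(f)

    cat_names = {c["id"]: c["name"] for c in coco["categories"]}
    img_files = {i["id"]: i["file_name"] for i in coco["images"]}

    for ann in coco["annotations"]:
        target = CATEGORY_MAP.get(cat_names[ann["category_id"]])
        if target is None:
            continue                          # skip unmapped categories
        src = Path("images") / img_files[ann["image_id"]]
        dst = Path("sorted") / target / src.name
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy(src, dst)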

3. TrashNet

  • Images: 2,533
  • Organization: Directory-based sorting
  • Classes: 6 categories
    • Cardboard
    • Glass
    • Metal
    • Paper
    • Plastic
    • Trash (miscellaneous)

4. Google Images API

  • Images: 4,500
  • Organization: Directory-based sorting
  • Collection: Images scraped via the Google Images API to expand the dataset

Data Processing Pipeline

Image Augmentation

We apply 14 different manipulations to expand the dataset:

Transformation | Description
Grayscale | Convert to grayscale
Rotation | 90°, 180°, 270° rotations
Flipping | Horizontal and vertical flips
Noise | Add random noise
Blur | Apply Gaussian blur
Brightness | Brighten and darken
Color Effects | Invert colors, posterize, solarize
Equalization | Histogram equalization
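
The actual implementation lives in img_manipulation.py; as an illustrative sketch only, the 14 variants could be produced with Pillow and NumPy roughly like this, assuming RGB input (the helper name is hypothetical):

    from PIL import Image, ImageEnhance, ImageFilter, ImageOps
    import numpy as np

    def augment_image(img):  # hypothetical helper; expects an RGB PIL image
        variants = {"grayscale": ImageOps.grayscale(img)}
        for angle in (90, 180, 270):                       # 3 rotations
            variants[f"rotate_{angle}"] = img.rotate(angle, expand=True)
        variants["flip_h"] = ImageOps.mirror(img)          # horizontal flip
        variants["flip_v"] = ImageOps.flip(img)            # vertical flip
        arr = np.asarray(img).astype(np.int16)             # random noise
        noisy = np.clip(arr + np.random.randint(-25, 26, arr.shape), 0, 255)
        variants["noise"] = Image.fromarray(noisy.astype(np.uint8))
        variants["blur"] = img.filter(ImageFilter.GaussianBlur(radius=2))
        variants["brighten"] = ImageEnhance.Brightness(img).enhance(1.5)
        variants["darken"] = ImageEnhance.Brightness(img).enhance(0.5)
        variants["invert"] = ImageOps.invert(img)          # color effects
        variants["posterize"] = ImageOps.posterize(img, 4)
        variants["solarize"] = ImageOps.solarize(img, 128)
        variants["equalize"] = ImageOps.equalize(img)      # histogram equalization
        return variants                                    # 14 variants total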

Standardization

  • All images are resized to 224×224 pixels
  • Stored as NumPy arrays (.npy) or PyTorch tensors
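
A minimal sketch of the resize-and-save step follows; standardize.py's actual logic may differ, and the directory layout and .jpg glob here are assumptions:

    from pathlib import Path
    from PIL import Image
    import numpy as np

    def standardize(src_dir, dst_dir, size=(224, 224)):  # hypothetical helper
        dst = Path(dst_dir)
        dst.mkdir(parents=True, exist_ok=True)
        for path in sorted(Path(src_dir).glob("*.jpg")):
            img = Image.open(path).convert("RGB").resize(size)
            np.save(dst / f"{path.stem}.npy", np.asarray(img))  # one .npy per image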

HuggingFace Dataset Upload Instructions

  1. Generate Image Variations

    python img_manipulation.py
    
  2. Standardize Images

    python standardize.py
    
  3. Upload to HuggingFace

    # Install the huggingface_hub library (provides the CLI)
    pip install huggingface_hub
    
    # Upload files
    python upload_files.py
    

    Note: Modify the paths in upload_files.py to point to your data.
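
For reference, a minimal upload_files.py could use huggingface_hub's upload_folder; the folder path and repo ID below are placeholders:

    from huggingface_hub import HfApi

    api = HfApi()  # authenticates via `huggingface-cli login` or the HF_TOKEN env var
    api.upload_folder(
        folder_path="data/standardized",    # placeholder: your local data directory
        repo_id="your-username/wastewise",  # placeholder: your dataset repo
        repo_type="dataset",
    )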

Model Architecture: ResNet

We implement a ResNet (Residual Network) architecture for our trash classification task, leveraging deep residual learning. ResNet improves on traditional Convolutional Neural Networks (CNNs) by introducing skip connections that allow information to bypass layers. These connections address the vanishing gradient problem that plagued deep networks: gradients become extremely small during backpropagation, preventing very deep networks from training effectively.

Unlike conventional CNNs where performance degrades as network depth increases beyond a certain point, ResNets can be substantially deeper (50, 101, or even 152 layers) while maintaining or improving accuracy. The key innovation is the residual block structure, which learns residual mappings instead of direct mappings, making optimization easier. This allows the network to decide whether to use or skip certain layers during training, effectively creating an ensemble of networks with different depths.
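
The repository's model is built with TinyGrad, but the residual idea is framework-agnostic. As an illustration only, a standard bottleneck block looks roughly like this in PyTorch (which the pipeline already uses for tensor storage):

    import torch
    import torch.nn as nn

    class Bottleneck(nn.Module):
        expansion = 4  # output channels = mid_ch * 4, as in standard ResNet

        def __init__(self, in_ch, mid_ch, stride=1):
            super().__init__()
            out_ch = mid_ch * self.expansion
            self.conv1 = nn.Conv2d(in_ch, mid_ch, 1, bias=False)   # 1x1 reduce
            self.bn1 = nn.BatchNorm2d(mid_ch)
            self.conv2 = nn.Conv2d(mid_ch, mid_ch, 3, stride=stride,
                                   padding=1, bias=False)          # 3x3 conv
            self.bn2 = nn.BatchNorm2d(mid_ch)
            self.conv3 = nn.Conv2d(mid_ch, out_ch, 1, bias=False)  # 1x1 expand
            self.bn3 = nn.BatchNorm2d(out_ch)
            self.relu = nn.ReLU(inplace=True)
            self.downsample = None
            if stride != 1 or in_ch != out_ch:  # match identity shape when needed
                self.downsample = nn.Sequential(
                    nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                    nn.BatchNorm2d(out_ch),
                )

        def forward(self, x):
            identity = x if self.downsample is None else self.downsample(x)
            out = self.relu(self.bn1(self.conv1(x)))
            out = self.relu(self.bn2(self.conv2(out)))
            out = self.bn3(self.conv3(out))
            # Skip connection: gradients flow unimpeded through the identity path
            return self.relu(out + identity)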

For our trash classification task, this architecture provides superior feature extraction capabilities, capturing both fine-grained details and higher-level abstractions necessary for distinguishing between various waste materials.

Key Features

  • Architecture: ResNet with Bottleneck blocks
  • Implementation: Built using TinyGrad for efficient training
  • Structure:
    • Initial 7×7 convolution with stride 2
    • Four residual layers with bottleneck blocks
    • Global average pooling
    • Fully connected layer for classification
  • Residual Learning: Uses skip connections to address the vanishing gradient problem
  • Configuration:
    • Input size: 224×224×3 (RGB images)
    • Output: 3 classes (Compostable, Non-recyclable, Recyclable)
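
For shape intuition, torchvision's reference ResNet-50 matches the structure listed above; this is an equivalent sketch, not the repo's TinyGrad model:

    import torch
    from torchvision.models import resnet50

    model = resnet50(num_classes=3)  # 7x7 stem, four bottleneck stages, avg pool, FC
    logits = model(torch.randn(1, 3, 224, 224))
    print(logits.shape)              # torch.Size([1, 3])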

Training Process

  • Optimizer: SGD with momentum (0.9)
  • Learning Rate: 0.001
  • Batch Size: Variable (configurable)
  • Metrics: Accuracy, Precision, Recall, F1-score
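
With the hyperparameters above, a conventional training loop might look like the following sketch; model and train_loader are assumed to exist already:

    import torch
    import torch.nn as nn

    optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    for images, labels in train_loader:  # train_loader is an assumed DataLoader
        optimizer.zero_grad()
        loss = criterion(model(images), labels)  # logits vs. class indices
        loss.backward()
        optimizer.step()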

Performance Evaluation

The model is evaluated on a held-out test set with comprehensive metrics to ensure robust classification across all waste categories.
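
The reported metrics can be computed with scikit-learn; y_true and y_pred below are assumed arrays of class indices from the held-out test set:

    from sklearn.metrics import accuracy_score, precision_recall_fscore_support

    acc = accuracy_score(y_true, y_pred)
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro"  # macro-average across the 3 classes
    )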