---
license: cc-by-4.0
task_categories:
  - object-detection
tags:
  - COCO
  - Detection
  - '2017'
pretty_name: COCO detection dataset script
size_categories:
  - 100K<n<1M
dataset_info:
  config_name: '2017'
  features:
    - name: id
      dtype: int64
    - name: objects
      struct:
        - name: bbox_id
          sequence: int64
        - name: category_id
          sequence:
            class_label:
              names:
                '0': N/A
                '1': person
                '2': bicycle
                '3': car
                '4': motorcycle
                '5': airplane
                '6': bus
                '7': train
                '8': truck
                '9': boat
                '10': traffic light
                '11': fire hydrant
                '12': street sign
                '13': stop sign
                '14': parking meter
                '15': bench
                '16': bird
                '17': cat
                '18': dog
                '19': horse
                '20': sheep
                '21': cow
                '22': elephant
                '23': bear
                '24': zebra
                '25': giraffe
                '26': hat
                '27': backpack
                '28': umbrella
                '29': shoe
                '30': eye glasses
                '31': handbag
                '32': tie
                '33': suitcase
                '34': frisbee
                '35': skis
                '36': snowboard
                '37': sports ball
                '38': kite
                '39': baseball bat
                '40': baseball glove
                '41': skateboard
                '42': surfboard
                '43': tennis racket
                '44': bottle
                '45': plate
                '46': wine glass
                '47': cup
                '48': fork
                '49': knife
                '50': spoon
                '51': bowl
                '52': banana
                '53': apple
                '54': sandwich
                '55': orange
                '56': broccoli
                '57': carrot
                '58': hot dog
                '59': pizza
                '60': donut
                '61': cake
                '62': chair
                '63': couch
                '64': potted plant
                '65': bed
                '66': mirror
                '67': dining table
                '68': window
                '69': desk
                '70': toilet
                '71': door
                '72': tv
                '73': laptop
                '74': mouse
                '75': remote
                '76': keyboard
                '77': cell phone
                '78': microwave
                '79': oven
                '80': toaster
                '81': sink
                '82': refrigerator
                '83': blender
                '84': book
                '85': clock
                '86': vase
                '87': scissors
                '88': teddy bear
                '89': hair drier
                '90': toothbrush
        - name: bbox
          sequence:
            sequence: float64
            length: 4
        - name: iscrowd
          sequence: int64
        - name: area
          sequence: float64
    - name: height
      dtype: int64
    - name: width
      dtype: int64
    - name: file_name
      dtype: string
    - name: coco_url
      dtype: string
    - name: image_path
      dtype: string
  splits:
    - name: train
      num_bytes: 87231216
      num_examples: 117266
    - name: validation
      num_bytes: 3692192
      num_examples: 4952
  download_size: 20405354669
  dataset_size: 90923408
---

## Usage

To use the COCO dataset (2017), you first need to download it manually:

```bash
wget http://images.cocodataset.org/zips/train2017.zip
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
```

Then load the dataset:

```python
import datasets

COCO_DIR = ...(path to the downloaded dataset directory)...
ds = datasets.load_dataset(
    "yonigozlan/coco_detection_dataset_script",
    "2017",
    data_dir=COCO_DIR,
    trust_remote_code=True,
)
```
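
To sanity-check the download, you can print the splits and draw the ground-truth boxes of one example (a minimal sketch; it relies only on the features listed in the metadata above and assumes Pillow is installed):

```python
from PIL import Image, ImageDraw

print(ds)  # shows the train/validation splits and their sizes

# Boxes are COCO-style [x, y, width, height].
example = ds["validation"][0]
image = Image.open(example["image_path"]).convert("RGB")
draw = ImageDraw.Draw(image)
for x, y, w, h in example["objects"]["bbox"]:
    draw.rectangle([x, y, x + w, y + h], outline="red", width=2)
image.save("example_with_boxes.jpg")
```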

## Benchmarking

Here is an example of how to benchmark a 🤗 Transformers object detection model on the validation data of the COCO dataset:

```python
import datasets
import torch
from PIL import Image
from torch.utils.data import DataLoader
from torchmetrics.detection.mean_ap import MeanAveragePrecision
from tqdm import tqdm

from transformers import AutoImageProcessor, AutoModelForObjectDetection

# prepare data
COCO_DIR = ...(path to the downloaded dataset directory)...
ds = datasets.load_dataset(
    "yonigozlan/coco_detection_dataset_script",
    "2017",
    data_dir=COCO_DIR,
    trust_remote_code=True,
)
val_data = ds["validation"]
categories = val_data.features["objects"]["category_id"].feature.names
id2label = dict(enumerate(categories))  # dataset label id -> name
label2id = {v: k for k, v in id2label.items()}
checkpoint = "facebook/detr-resnet-50"

# load model and processor
model = AutoModelForObjectDetection.from_pretrained(
    checkpoint, torch_dtype=torch.float16
).to("cuda")
id2label_model = model.config.id2label
processor = AutoImageProcessor.from_pretrained(checkpoint)


def collate_fn(batch):
    data = {}
    images = [Image.open(x["image_path"]).convert("RGB") for x in batch]
    data["images"] = images
    annotations = []
    for x in batch:
        boxes = x["objects"]["bbox"]
        # COCO boxes are [x, y, width, height]; convert to [x_min, y_min, x_max, y_max]
        boxes = [[box[0], box[1], box[0] + box[2], box[1] + box[3]] for box in boxes]
        labels = x["objects"]["category_id"]
        boxes = torch.tensor(boxes)
        labels = torch.tensor(labels)
        annotations.append({"boxes": boxes, "labels": labels})
    data["original_size"] = [(x["height"], x["width"]) for x in batch]
    data["annotations"] = annotations
    return data


# prepare dataloader
dataloader = DataLoader(val_data, batch_size=8, collate_fn=collate_fn)

# prepare metric
metric = MeanAveragePrecision(box_format="xyxy", class_metrics=True)

# evaluation loop
for batch in tqdm(dataloader):
    inputs = (
        processor(batch["images"], return_tensors="pt").to("cuda").to(torch.float16)
    )
    with torch.no_grad():
        outputs = model(**inputs)
    target_sizes = torch.tensor(batch["original_size"], device="cuda")
    # threshold=0.0 keeps every prediction, as mAP computation expects
    results = processor.post_process_object_detection(
        outputs, threshold=0.0, target_sizes=target_sizes
    )

    # map the model's label ids to the dataset's label ids if the label spaces differ
    if len(id2label_model) != len(id2label):
        for result in results:
            result["labels"] = torch.tensor(
                [label2id.get(id2label_model[x.item()], 0) for x in result["labels"]]
            )
    # put results back to cpu
    for result in results:
        for k, v in result.items():
            if isinstance(v, torch.Tensor):
                result[k] = v.to("cpu")
    metric.update(results, batch["annotations"])

metrics = metric.compute()
print(metrics)
```
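
Because the metric was created with `class_metrics=True`, `metric.compute()` also returns per-class results. A short sketch for printing per-class AP with the dataset's label names (key names follow torchmetrics' `MeanAveragePrecision` output):

```python
# Map the metric's class ids back to the dataset's label names.
for cls, ap in zip(metrics["classes"].tolist(), metrics["map_per_class"].tolist()):
    print(f"{id2label[cls]:>20}: {ap:.3f}")
```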