
Usage

To use the COCO (2017) dataset, you first need to download it manually:

wget http://images.cocodataset.org/zips/train2017.zip
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
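
The loading script reads from a local directory, so extract the archives into the directory you will pass as data_dir. The exact expected layout is an assumption here; adjust if loading fails with missing-file errors:

unzip train2017.zip -d COCO_DIR
unzip val2017.zip -d COCO_DIR
unzip annotations_trainval2017.zip -d COCO_DIR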

Then to load the dataset:

import datasets

COCO_DIR = ...(path to the downloaded dataset directory)...
ds = datasets.load_dataset(
    "yonigozlan/coco_detection_dataset_script",
    "2017",
    data_dir=COCO_DIR,
    trust_remote_code=True,
)
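
To sanity-check the result, you can inspect one validation example; the fields shown here (image_path, height, width, and the objects annotations) are the ones the benchmarking script below relies on:

example = ds["validation"][0]
print(example["image_path"], example["height"], example["width"])
print(example["objects"]["bbox"][0], example["objects"]["category_id"][0])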

Benchmarking

Here is an example of how to benchmark a 🤗 Transformers object detection model on the validation data of the COCO dataset:

import datasets
import torch
from PIL import Image
from torch.utils.data import DataLoader
from torchmetrics.detection.mean_ap import MeanAveragePrecision
from tqdm import tqdm

from transformers import AutoImageProcessor, AutoModelForObjectDetection

# prepare data
COCO_DIR = ...(path to the downloaded dataset directory)...
ds = datasets.load_dataset(
    "yonigozlan/coco_detection_dataset_script",
    "2017",
    data_dir=COCO_DIR,
    trust_remote_code=True,
)
val_data = ds["validation"]
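# the dataset's category names; build id <-> label mappings to align model and dataset labels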
categories = val_data.features["objects"]["category_id"].feature.names
id2label = dict(enumerate(categories))
label2id = {v: k for k, v in id2label.items()}
checkpoint = "facebook/detr-resnet-50"

# load model and processor
model = AutoModelForObjectDetection.from_pretrained(
    checkpoint, torch_dtype=torch.float16
).to("cuda")
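# the model's own label mapping, which may differ from the dataset's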
id2label_model = model.config.id2label
processor = AutoImageProcessor.from_pretrained(checkpoint)


def collate_fn(batch):
    data = {}
    images = [Image.open(x["image_path"]).convert("RGB") for x in batch]
    data["images"] = images
    annotations = []
    for x in batch:
        boxes = x["objects"]["bbox"]
        # COCO bbox format is [x, y, width, height]; convert to [x_min, y_min, x_max, y_max]
        boxes = [[box[0], box[1], box[0] + box[2], box[1] + box[3]] for box in boxes]
        labels = x["objects"]["category_id"]
        boxes = torch.tensor(boxes)
        labels = torch.tensor(labels)
        annotations.append({"boxes": boxes, "labels": labels})
    data["original_size"] = [(x["height"], x["width"]) for x in batch]
    data["annotations"] = annotations
    return data


# prepare dataloader
dataloader = DataLoader(val_data, batch_size=8, collate_fn=collate_fn)

# prepare metric
metric = MeanAveragePrecision(box_format="xyxy", class_metrics=True)

# evaluation loop
for i, batch in tqdm(enumerate(dataloader), total=len(dataloader)):
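    # preprocess the batch and cast to float16 to match the model dtype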
    inputs = (
        processor(batch["images"], return_tensors="pt").to("cuda").to(torch.float16)
    )
    with torch.no_grad():
        outputs = model(**inputs)
    target_sizes = torch.tensor(batch["original_size"]).to("cuda")
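    # threshold=0.0 keeps every prediction, as COCO-style mAP expects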
    results = processor.post_process_object_detection(
        outputs, threshold=0.0, target_sizes=target_sizes
    )

    # map the model's predicted label ids onto the dataset's label ids when the label spaces differ
    if len(id2label_model) != len(id2label):
        for result in results:
            result["labels"] = torch.tensor(
                [label2id.get(id2label_model[x.item()], 0) for x in result["labels"]]
            )
    # move results to CPU before updating the metric
    for result in results:
        for k, v in result.items():
            if isinstance(v, torch.Tensor):
                result[k] = v.to("cpu")
    metric.update(results, batch["annotations"])

metrics = metric.compute()
print(metrics)
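
Since the metric was created with class_metrics=True, the result dictionary also includes per-class values. A minimal sketch for printing per-class AP with the dataset's label names (assuming the torchmetrics keys classes and map_per_class, which that setting populates):

# print AP for each dataset category
for cls_id, ap in zip(metrics["classes"].tolist(), metrics["map_per_class"].tolist()):
    print(f"{id2label[int(cls_id)]}: {ap:.3f}")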