---
annotations_creators: []
language: en
size_categories:
- 1K<n<10K
task_categories:
- image-classification
task_ids: []
pretty_name: ImageNet-O
tags:
- fiftyone
- image
- image-classification
dataset_summary: '
  This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 2000 samples.

  ## Installation

  If you haven''t already, install FiftyOne:

  ```bash
  pip install -U fiftyone
  ```

  ## Usage

  ```python
  import fiftyone as fo
  import fiftyone.utils.huggingface as fouh

  # Load the dataset
  # Note: other available arguments include ''max_samples'', etc
  dataset = fouh.load_from_hub("Voxel51/ImageNet-O")

  # Launch the App
  session = fo.launch_app(dataset)
  ```
  '
---
# Dataset Card for ImageNet-O

![image](ImageNet-O.png)

This is a [FiftyOne](https://github.com/voxel51/fiftyone) dataset with 2000 samples.

The recipe notebook for creating this dataset can be found [here](https://colab.research.google.com/drive/1ScN-30Q-1ssAwuQYIbZ453h0vo0SAhz8).

## Installation

If you haven't already, install FiftyOne:

```bash
pip install -U fiftyone
```

## Usage

```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh

# Load the dataset
# Note: other available arguments include 'max_samples', etc
dataset = fouh.load_from_hub("Voxel51/ImageNet-O")

# Launch the App
session = fo.launch_app(dataset)
```
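
As noted in the comment above, `load_from_hub` accepts additional arguments such as `max_samples`. Below is a minimal sketch of loading a small subset; the `name` and `persistent` arguments and their values are illustrative choices, not requirements.

```python
import fiftyone as fo
import fiftyone.utils.huggingface as fouh

# Load only the first 50 samples as a persistent dataset
# ("imagenet-o-preview" is an arbitrary example name)
subset = fouh.load_from_hub(
    "Voxel51/ImageNet-O",
    max_samples=50,
    name="imagenet-o-preview",
    persistent=True,
)

print(subset)  # prints a summary of the dataset's samples and fields

# Launch the App on the subset
session = fo.launch_app(subset)
```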
## Dataset Details

### Dataset Description

The ImageNet-O dataset consists of images from classes not found in the standard ImageNet-1k dataset. It tests the robustness and out-of-distribution detection capabilities of computer vision models trained on ImageNet-1k.

Key points about ImageNet-O:

- Contains images from classes distinct from the 1,000 classes in ImageNet-1k
- Enables testing model performance on out-of-distribution samples, i.e. images that are semantically different from the training data
- Commonly used to evaluate out-of-distribution detection methods for models trained on ImageNet
- Performance is typically reported as the area under the precision-recall curve (AUPR); see the sketch after this list
- Manually annotated, with a naturally diverse class distribution, at large scale

- **Curated by:** Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, Dawn Song
- **Shared by:** [Harpreet Sahota](https://twitter.com/datascienceharp), Hacker-in-Residence at Voxel51
- **Language(s) (NLP):** en
- **License:** [MIT License](https://github.com/hendrycks/natural-adv-examples/blob/master/LICENSE)
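
For context on the AUPR metric mentioned above: out-of-distribution detection on ImageNet-O is typically scored by treating the ImageNet-O images as the positive (anomalous) class, assigning each image an anomaly score from the model under test, and computing the area under the precision-recall curve. Below is a minimal sketch of that computation using scikit-learn; the scores are randomly generated stand-ins for real model outputs, and scikit-learn is not a dependency of this dataset.

```python
import numpy as np
from sklearn.metrics import average_precision_score

# Hypothetical anomaly scores: higher = "more likely out-of-distribution".
# In practice these might come from a model, e.g. negative max-softmax probability.
rng = np.random.default_rng(51)
scores_in_dist = rng.normal(loc=0.2, scale=0.1, size=1000)  # in-distribution images
scores_ood = rng.normal(loc=0.5, scale=0.1, size=2000)      # ImageNet-O images

# ImageNet-O (out-of-distribution) is the positive class
y_true = np.concatenate([np.zeros(len(scores_in_dist)), np.ones(len(scores_ood))])
y_score = np.concatenate([scores_in_dist, scores_ood])

# AUPR = area under the precision-recall curve (average precision)
aupr = average_precision_score(y_true, y_score)
print(f"AUPR: {aupr:.3f}")
```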
### Dataset Sources
- **Repository:** https://github.com/hendrycks/natural-adv-examples
- **Paper:** https://arxiv.org/abs/1907.07174
## Citation

**BibTeX:**

```bibtex
@article{hendrycks2021nae,
  title={Natural Adversarial Examples},
  author={Dan Hendrycks and Kevin Zhao and Steven Basart and Jacob Steinhardt and Dawn Song},
  journal={CVPR},
  year={2021}
}
```