---
tags:
  - image-classification
  - climate
  - biology
base_model: microsoft/resnet-50
widget:
  - src: >-
      https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
    example_title: Tiger
  - src: >-
      https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
    example_title: Teapot
  - src: >-
      https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
    example_title: Palace
license: apache-2.0
metrics:
  - accuracy
pipeline_tag: image-classification
library_name: transformers
---

# Model Trained Using AutoTrain

- Problem type: Image Classification

## Image Classification Model Results (AutoTrain)

### Validation Metrics

| Metric   | Value  |
|----------|--------|
| Loss     | 0.5462 |
| Accuracy | 0.7371 |

### F1 Scores

| Type     | Value  |
|----------|--------|
| Macro    | 0.3900 |
| Micro    | 0.7371 |
| Weighted | 0.6628 |

### Precision

| Type     | Value  |
|----------|--------|
| Macro    | 0.3468 |
| Micro    | 0.7371 |
| Weighted | 0.6320 |

### Recall

| Type     | Value  |
|----------|--------|
| Macro    | 0.4972 |
| Micro    | 0.7371 |
| Weighted | 0.7371 |
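
The gap between the macro and micro/weighted scores above suggests a class imbalance: macro averaging treats every class equally, while micro and weighted averaging are dominated by the more frequent classes. As a minimal sketch of how these three averages are computed (using made-up label arrays, not this model's actual validation split):

```python
from sklearn.metrics import f1_score, precision_score, recall_score

# Hypothetical labels and predictions for an imbalanced 3-class problem
# (illustration only; not the model's real validation data).
y_true = [0, 0, 0, 1, 1, 2, 2, 2, 2, 2]
y_pred = [0, 0, 1, 1, 2, 2, 2, 2, 2, 2]

for average in ("macro", "micro", "weighted"):
    print(
        average,
        f1_score(y_true, y_pred, average=average),
        precision_score(y_true, y_pred, average=average),
        recall_score(y_true, y_pred, average=average),
    )
```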

## How to use

This model is designed for image classification. Here's how you can use it:

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import torch

model_name = "eligapris/v-mdd-2000"
processor = AutoImageProcessor.from_pretrained(model_name)
model = AutoModelForImageClassification.from_pretrained(model_name)

# Load an image and preprocess it into model-ready tensors.
image = Image.open("path_to_your_image.jpg")
inputs = processor(images=image, return_tensors="pt")

# Run inference without tracking gradients.
with torch.no_grad():
    outputs = model(**inputs)

# Pick the class with the highest logit and map it to its label.
logits = outputs.logits
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```