---
license: apache-2.0
language:
- en
datasets:
- AiresPucrs/CelebA-Smiles
metrics:
- accuracy
tags:
- image-classification
---
# LeNNon Smile Detector (Teeny-Tiny Castle)
This model is part of a tutorial tied to the Teeny-Tiny Castle, an open-source repository containing educational tools for AI Ethics and Safety research.
## How to Use
```python
import torch
from PIL import Image
from lennon import LeNNon
from torchvision import transforms
from huggingface_hub import hf_hub_download

# Download the PyTorch model
hf_hub_download(repo_id="AiresPucrs/LeNNon-Smile-Detector",
                filename="LeNNon-Smile-Detector.pt",
                local_dir="./",
                repo_type="model"
                )

# Download the source implementation of the model's architecture
hf_hub_download(repo_id="AiresPucrs/LeNNon-Smile-Detector",
                filename="lennon.py",
                local_dir="./",
                repo_type="model"
                )

# Check if GPU is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the model and send it to the proper device
model = torch.load('./LeNNon-Smile-Detector.pt', map_location=device)
model = model.to(device)
model.eval()

# This `transform` object will turn test images into proper tensors
transform = transforms.Compose([
    transforms.Resize((100, 100)),  # Resize the image to 100x100
    transforms.ToTensor(),
])

image_path = "your_image_path_here"

# Open and preprocess the image
image = Image.open(image_path).convert("RGB")  # ensure a 3-channel RGB image
tensor = transform(image)
tensor = tensor.unsqueeze(0).to(device)  # add a batch dimension and move to the device

# Forward pass through the model
with torch.no_grad():
    outputs = model(tensor)

# Get the class prediction
_, predicted = torch.max(outputs.data, 1)

print("Smiling" if predicted.item() > 0 else "Not Smiling")
```