Dataset viewer summary: ~7.14k images (image width ~106 px) with a class label column spanning 100 classes; the visible rows cycle through labels such as all_purpose_flour_annotated, almonds_annotated, and apple_annotated.
"Eyes on Eats," aims to address the challenge many individuals face when they are unsure of what to cook with the ingredients available. This uncertainty often leads to wasted time contemplating meal options or even unnecessary spending on ordering food. "Eyes on Eats" offers a solution by employing advanced deep learning techniques in object detection and text generation. By analyzing images of ingredients, the system generates personalized recipes tailored to the user's available ingredients. This innovative approach not only streamlines the cooking process but also encourages culinary creativity. With "Eyes on Eats," users can confidently embark on their culinary journey without the stress of meal planning, ultimately saving time and potentially reducing unnecessary expenses.
Objectives:
- Develop a robust object detection model capable of accurately identifying various ingredients depicted in images.
- Implement an efficient text generation model to seamlessly translate detected ingredients into personalized recipe recommendations.
- Ensure the scalability and adaptability of the system to accommodate a wide range of ingredients and recipes.
Datasets
We need two types of data for this project: image data of the ingredients to train the object detection model, and textual recipe data to train the second model to generate recipes from the detected ingredients.
Object Detection Data
We explored various ingredient datasets on the internet, but they lacked what we needed and were too small for training a complex model. So we scraped the web ourselves using the Bing image downloader. We noticed inconsistencies in the image formats and were limited by the tool's restriction of downloading one class at a time, so we modified it for our requirements and scraped 100 classes of images with it.
Here is the list of ingredients we scraped using the tool:
all_purpose_flour | almonds | apple | apricot | asparagus |
---|---|---|---|---|
avocado | bacon | banana | barley | basil |
basmati_rice | beans | beef | beets | bell_pepper |
berries | biscuits | blackberries | black_pepper | blueberries |
bread | bread_crumbs | bread_flour | broccoli | brownie_mix |
brown_rice | butter | cabbage | cake | cardamom |
carrot | cashews | cauliflower | celery | cereal |
cheese | cherries | chicken | chickpeas | chocolate |
chocolate_chips | chocolate_syrup | cilantro | cinnamon | clove |
cocoa_powder | coconut | cookies | corn | cucumber |
dates | eggplant | eggs | fish | garlic |
ginger | grapes | honey | jalapeno | kidney_beans |
lemon | mango | marshmallows | milk | mint |
muffins | mushroom | noodles | nuts | oats |
okra | olive | onion | orange | oreo_cookies |
pasta | pear | pepper | pineapple | pistachios |
pork | potato | pumpkin | radishes | raisins |
red_chilies | rice | rosemary | salmon | salt |
shrimp | spinach | strawberries | sugar | sweet_potato |
tomato | vanilla_ice_cream | walnuts | watermelon | yogurt |
After scraping, the data is stored in a structured directory layout: each category has its own subdirectory containing the images for that category. At this point we have image classification data, but we need object detection data. Before converting it, we need to clean and verify the collected data.
Correcting the initial data
The class names in the table above contain underscores, but we can't use such names to scrape the web, as that can lead to less accurate results. So the scraping keywords are provided in a form that doesn't affect the search, as the following example shows.
queries = [ "baking powder", "basil", "cereal", "cheese", "chicken"]
for query in queries:
if len(sys.argv) == 3:
filter = sys.argv[2]
else:
filter = ""
downloader.download(
query,
limit=50,
output_dir="dataset_dem",
adult_filter_off=True,
force_replace=False,
timeout=120,
filter=filter,
verbose=True,
)
The scraping process above creates directory names like "baking powder", which can lead to various inconsistencies in later processing. So we created these steps to ensure consistency:
- Convert Spaces in Directory Names to Underscores: Rename directories to replace spaces with underscores to avoid inconsistencies. For example, rename "all purpose flour" to "all_purpose_flour" (a sketch of this step follows the sample output below).
Renamed 'all purpose flour' to 'all_purpose_flour'
Renamed 'basmati rice' to 'basmati_rice'
Renamed 'bell pepper' to 'bell_pepper'
Renamed 'black pepper' to 'black_pepper'
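A minimal sketch of this renaming step; the dataset path mirrors the one used elsewhere in this write-up, so adjust it to your layout:
import os

dataset_dir = r'C:\Users\Kiyo\Desktop\DL\Project\image_data\initial_data'  # adjust to your layout

for name in os.listdir(dataset_dir):
    old_path = os.path.join(dataset_dir, name)
    # Only rename class directories whose names still contain spaces
    if os.path.isdir(old_path) and ' ' in name:
        new_name = name.replace(' ', '_')
        os.rename(old_path, os.path.join(dataset_dir, new_name))
        print(f"Renamed '{name}' to '{new_name}'")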
- Verify Folder Names Against Class List: Ensure all folder names match exactly with the classes listed in a "Final_classes.txt" file. This step checks for both missing directories and extra directories not listed in the class list (see the sketch after the sample output below).
All classes in 'Final_classes.txt' have corresponding directories in the dataset.
No extra directories in the dataset that are not listed in 'Final_classes.txt'.
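A sketch of this verification, assuming Final_classes.txt lists one class name per line:
import os

dataset_dir = r'C:\Users\Kiyo\Desktop\DL\Project\image_data\initial_data'  # adjust to your layout
with open('Final_classes.txt', 'r') as f:
    classes = {line.strip() for line in f if line.strip()}

dirs = {d for d in os.listdir(dataset_dir) if os.path.isdir(os.path.join(dataset_dir, d))}
missing = classes - dirs   # classes with no corresponding directory
extra = dirs - classes     # directories not listed in the class file

print("Missing directories:", sorted(missing) or "none")
print("Extra directories:", sorted(extra) or "none")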
- Remove Non-JPG Files: Execute a script to traverse the dataset directories and remove any files that are not in .jpg format. This is crucial for maintaining consistency in the file format across the dataset.
import os

def remove_non_jpg_images(dataset_dir):
    removed_files = []
    for root, dirs, files in os.walk(dataset_dir):
        for file in files:
            # Check if the file extension is not .jpg
            if not file.lower().endswith('.jpg'):
                file_path = os.path.join(root, file)
                os.remove(file_path)  # Remove the non-JPG file
                removed_files.append(file_path)
    return removed_files

dataset_dir = r'C:\Users\Kiyo\Desktop\DL\Project\image_data\initial_data'
removed_files = remove_non_jpg_images(dataset_dir)

if removed_files:
    print(f"Removed {len(removed_files)} non-JPG files:")
    for file in removed_files:
        print(file)
else:
    print("No non-JPG files found in the dataset.")
- Check for Class Image Count: Ensure that each class directory contains exactly 50 images. If a class has more than 50 images, randomly remove the excess so each class is capped at 50 (a sketch of this step follows the sample counts).
all_purpose_flour: 50 images
almonds: 50 images
apple: 50 images
apricot: 50 images
asparagus: 50 images
avocado: 50 images
bacon: 50 images
..
..
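A sketch of the trimming step, assuming the same directory layout as the earlier cleaning steps:
import os
import random

dataset_dir = r'C:\Users\Kiyo\Desktop\DL\Project\image_data\initial_data'  # adjust to your layout
TARGET = 50

for class_name in os.listdir(dataset_dir):
    class_dir = os.path.join(dataset_dir, class_name)
    if not os.path.isdir(class_dir):
        continue
    images = [f for f in os.listdir(class_dir) if f.lower().endswith('.jpg')]
    if len(images) > TARGET:
        # Randomly drop the excess so the class ends up with exactly 50 images
        for file_name in random.sample(images, len(images) - TARGET):
            os.remove(os.path.join(class_dir, file_name))
    print(f"{class_name}: {min(len(images), TARGET)} images")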
- Augment Images for Underrepresented Classes: For classes with fewer than 50 images, perform image augmentation to bring the total up to 50 images per class. This ensures uniformity in the number of images across all classes (a sketch follows the sample output).
Completed augmentation for class 'all_purpose_flour'.
Completed augmentation for class 'almonds'.
Completed augmentation for class 'apple'.
Completed augmentation for class 'apricot'.
Completed augmentation for class 'asparagus'.
Completed augmentation for class 'avocado'.
Completed augmentation for class 'bacon'.
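A minimal augmentation sketch using Pillow; the exact transforms we applied may differ, so treat the mirror/rotation choice here as illustrative:
import os
import random
from PIL import Image, ImageOps

dataset_dir = r'C:\Users\Kiyo\Desktop\DL\Project\image_data\initial_data'  # adjust to your layout
TARGET = 50

def augment(img):
    # A simple random transform: horizontal mirror or a small rotation
    if random.random() < 0.5:
        return ImageOps.mirror(img)
    return img.rotate(random.uniform(-20, 20), expand=True)

for class_name in os.listdir(dataset_dir):
    class_dir = os.path.join(dataset_dir, class_name)
    if not os.path.isdir(class_dir):
        continue
    images = [f for f in os.listdir(class_dir) if f.lower().endswith('.jpg')]
    count = len(images)
    while count < TARGET and images:
        src = random.choice(images)
        with Image.open(os.path.join(class_dir, src)) as img:
            augment(img.convert('RGB')).save(os.path.join(class_dir, f"aug_{count}_{src}"))
        count += 1
    print(f"Completed augmentation for class '{class_name}'.")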
Annotating the object detection data
The dataset is now ready: it contains 100 classes with 50 samples each. But it is in image classification format, which means that, to satisfy the stated objective, the model would have to be shown one ingredient per picture and, once the user has collected all the ingredients, generate recipes from the encoded vectors through text generation. That would be very inconvenient for the user, and manually annotating the image data for object detection would be just as inconvenient. Then we discovered Grounding DINO, a zero-shot object detection model.
Step 1: Check GPU Availability
Use !nvidia-smi to check if a GPU is available for faster processing.
Step 2: Set Home Directory
Define a HOME constant to manage datasets, images, and models easily:
import os
HOME = os.getcwd()
print(HOME)
Step 3: Install Grounding DINO
Clone the Grounding DINO repository, switch to a specific feature branch (if necessary), and install the dependencies:
%cd {HOME}
!git clone https://github.com/IDEA-Research/GroundingDINO.git
%cd {HOME}/GroundingDINO
# we use latest Grounding DINO model API that is not official yet
!git checkout feature/more_compact_inference_api
!pip install -q -e .
!pip install -q roboflow dataclasses-json onemetric
Step 4: Additional Dependencies & Verify CUDA and PyTorch
Ensure CUDA and PyTorch are correctly installed and compatible:
import torch
!nvcc --version
TORCH_VERSION = ".".join(torch.__version__.split(".")[:2])
CUDA_VERSION = torch.__version__.split("+")[-1]
print("torch: ", TORCH_VERSION, "; cuda: ", CUDA_VERSION)
import roboflow
import supervision
print(
"roboflow:", roboflow.__version__,
"; supervision:", supervision.__version__
)
# confirm that the configuration file exists
import os
CONFIG_PATH = os.path.join(HOME, "GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py")
print(CONFIG_PATH, "; exist:", os.path.isfile(CONFIG_PATH))
Step 5: Download Configuration and Weights
Ensure the configuration file exists within the cloned repository and download the model weights:
# download weights file
%cd {HOME}
!mkdir {HOME}/weights
%cd {HOME}/weights
!wget -q https://github.com/IDEA-Research/GroundingDINO/releases/download/v0.1.0-alpha/groundingdino_swint_ogc.pth
# confirm that the weights file exists
import os
WEIGHTS_PATH = os.path.join(HOME, "weights", "groundingdino_swint_ogc.pth")
print(WEIGHTS_PATH, "; exist:", os.path.isfile(WEIGHTS_PATH))
Step 6: Download and Prepare Your Dataset
If your dataset is zipped in your drive, unzip it to a local directory:
import zipfile
# Path to the zip file
zip_file_path = "/content/drive/MyDrive/....[your file path]"
# Directory to extract the contents of the zip file
extract_dir = "/content/data"
# Unzip the file
with zipfile.ZipFile(zip_file_path, 'r') as zip_ref:
    zip_ref.extractall(extract_dir)
print("Extraction complete.")
Step 7: Load the Grounding DINO Model
Load the model using the configuration and weights path:
%cd {HOME}/GroundingDINO
from groundingdino.util.inference import Model
model = Model(model_config_path=CONFIG_PATH, model_checkpoint_path=WEIGHTS_PATH)
Step 8: Annotate Dataset and Save to Pascal VOC
Use the model to annotate images. You can run inference in different modes, such as caption, classes, or enhanced classes, depending on your needs. After inference, use the detections and labels to annotate the images using your preferred method or the provided utility functions.
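For illustration, a rough sketch of the caption and classes modes, assuming the model loaded in Step 7; example.jpg is a placeholder image, and the "enhanced classes" mode simply rewrites each class name (as in enhance_class_name below) before calling the classes mode:
import cv2

image = cv2.imread("example.jpg")  # hypothetical sample image

# Caption mode: free-form text prompt, returns detections plus the matched phrases
detections, phrases = model.predict_with_caption(
    image=image,
    caption="almonds on a wooden table",
    box_threshold=0.35,
    text_threshold=0.25,
)

# Classes mode: a fixed list of class names, returns detections with class_id set
detections = model.predict_with_classes(
    image=image,
    classes=["almonds", "apple"],
    box_threshold=0.35,
    text_threshold=0.25,
)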
Automate the annotation process for your entire dataset by iterating over your images, running the model to detect objects, and saving both the annotated images and their PASCAL VOC XML files.
import os
import cv2
import xml.etree.ElementTree as ET
from groundingdino.util.inference import Model
from tqdm import tqdm

# Define the home directory and the path to the dataset
HOME = "/content"
DATASET_DIR = os.path.join(HOME, "data", "ingredients_images_dataset")

# Load the Grounding DINO model
MODEL_CONFIG_PATH = os.path.join(HOME, "GroundingDINO", "groundingdino", "config", "GroundingDINO_SwinT_OGC.py")
WEIGHTS_PATH = os.path.join(HOME, "weights", "groundingdino_swint_ogc.pth")
model = Model(model_config_path=MODEL_CONFIG_PATH, model_checkpoint_path=WEIGHTS_PATH)

# Load class labels from the file
LABELS_FILE_PATH = "[ txt file path containing your images labels one per line]"
with open(LABELS_FILE_PATH, "r") as f:
    CLASSES = [line.strip() for line in f.readlines()]

# Define annotation thresholds
BOX_THRESHOLD = 0.35
TEXT_THRESHOLD = 0.25

# Function to enhance class names
def enhance_class_name(class_names):
    return [f"all {class_name}s" for class_name in class_names]

# Function to create Pascal VOC format XML annotation
def create_pascal_voc_xml(image_filename, image_shape, boxes, labels):
    annotation = ET.Element("annotation")
    folder = ET.SubElement(annotation, "folder")
    folder.text = "ingredient_annotations"  # Folder name for annotations
    filename = ET.SubElement(annotation, "filename")
    filename.text = image_filename
    source = ET.SubElement(annotation, "source")
    database = ET.SubElement(source, "database")
    database.text = "Unknown"
    size = ET.SubElement(annotation, "size")
    width = ET.SubElement(size, "width")
    height = ET.SubElement(size, "height")
    depth = ET.SubElement(size, "depth")
    width.text = str(image_shape[1])
    height.text = str(image_shape[0])
    depth.text = str(image_shape[2])
    segmented = ET.SubElement(annotation, "segmented")
    segmented.text = "0"
    for box, label in zip(boxes, labels):
        object = ET.SubElement(annotation, "object")
        name = ET.SubElement(object, "name")
        pose = ET.SubElement(object, "pose")
        truncated = ET.SubElement(object, "truncated")
        difficult = ET.SubElement(object, "difficult")
        bndbox = ET.SubElement(object, "bndbox")
        xmin = ET.SubElement(bndbox, "xmin")
        ymin = ET.SubElement(bndbox, "ymin")
        xmax = ET.SubElement(bndbox, "xmax")
        ymax = ET.SubElement(bndbox, "ymax")
        name.text = label
        pose.text = "Unspecified"
        truncated.text = "0"
        difficult.text = "0"
        xmin.text = str(int(box[0]))
        ymin.text = str(int(box[1]))
        xmax.text = str(int(box[2]))
        ymax.text = str(int(box[3]))
    # Serialize the annotation tree to an XML string
    xml_string = ET.tostring(annotation, encoding="unicode")
    return xml_string

# Function to annotate images in a directory and save annotations in Pascal VOC format
def annotate_images_in_directory(directory):
    for class_name in CLASSES:
        class_dir = os.path.join(directory, class_name)
        annotated_dir = os.path.join(directory, f"{class_name}_annotated")
        os.makedirs(annotated_dir, exist_ok=True)
        print("Processing images in directory:", class_dir)
        if os.path.isdir(class_dir):
            for image_name in tqdm(os.listdir(class_dir)):
                image_path = os.path.join(class_dir, image_name)
                image = cv2.imread(image_path)
                if image is None:
                    print("Failed to load image:", image_path)
                    continue
                detections = model.predict_with_classes(
                    image=image,
                    classes=enhance_class_name([class_name]),
                    box_threshold=BOX_THRESHOLD,
                    text_threshold=TEXT_THRESHOLD
                )
                # Drop potential detections with phrase not part of CLASSES set
                detections = detections[detections.class_id != None]
                # Drop potential detections with area close to area of the whole image
                detections = detections[(detections.area / (image.shape[0] * image.shape[1])) < 0.9]
                # Drop potential double detections
                detections = detections.with_nms()
                # Create the Pascal VOC XML annotation for this image (one label per remaining box)
                xml_annotation = create_pascal_voc_xml(
                    image_filename=image_name,
                    image_shape=image.shape,
                    boxes=detections.xyxy,
                    labels=[class_name] * len(detections.xyxy)
                )
                # Save the Pascal VOC XML annotation to a file
                xml_filename = os.path.join(annotated_dir, f"{os.path.splitext(image_name)[0]}.xml")
                with open(xml_filename, "w") as xml_file:
                    xml_file.write(xml_annotation)
                # Save the annotated image
                annotated_image_path = os.path.join(annotated_dir, image_name)
                cv2.imwrite(annotated_image_path, image)

# Annotate images in the dataset directory
annotate_images_in_directory(DATASET_DIR)
We use this to automate annotating the dataset in Pascal VOC format. The output looks like the following: each image belonging to a class gets an XML file for that respective image.
<annotation>
    <folder>ingredient_annotations</folder>
    <filename>Image_1.jpg</filename>
    <source>
        <database>Unknown</database>
    </source>
    <size>
        <width>1920</width>
        <height>1280</height>
        <depth>3</depth>
    </size>
    <segmented>0</segmented>
    <object>
        <name>almonds</name>
        <pose>Unspecified</pose>
        <truncated>0</truncated>
        <difficult>0</difficult>
        <bndbox>
            <xmin>252</xmin>
            <ymin>650</ymin>
            <xmax>803</xmax>
            <ymax>920</ymax>
        </bndbox>
    </object>
</annotation>
Verifying the Annotated Data
- Check that every image is annotated by verifying that each image file has a corresponding XML file; if not, we remove the image or manually create a new annotated sample.
import os

def check_dataset_integrity(dataset_directory):
    for class_name in os.listdir(dataset_directory):
        class_path = os.path.join(dataset_directory, class_name)
        if os.path.isdir(class_path):
            jpg_files = set()
            xml_files = set()
            other_files = set()
            # Collect file names for each extension
            for file_name in os.listdir(class_path):
                if file_name.endswith('.jpg'):
                    jpg_files.add(os.path.splitext(file_name)[0])
                elif file_name.endswith('.xml'):
                    xml_files.add(os.path.splitext(file_name)[0])
                else:
                    other_files.add(file_name)
            # Check for discrepancies
            missing_xmls = jpg_files - xml_files
            missing_jpgs = xml_files - jpg_files
            is_perfect = len(missing_xmls) == 0 and len(missing_jpgs) == 0 and len(other_files) == 0
            # Report
            print(f"Class '{class_name}':", "Perfect" if is_perfect else "Discrepancies Found")
            if missing_xmls:
                print(f"  Missing XML files for: {', '.join(sorted(missing_xmls))}")
            if missing_jpgs:
                print(f"  Missing JPG files for: {', '.join(sorted(missing_jpgs))}")
            if other_files:
                print(f"  Non-JPG/XML files: {', '.join(sorted(other_files))}")
        else:
            print(f"'{class_name}' is not a directory. Skipping.")

# Specify the path to the dataset directory
dataset_directory = r'C:\Users\Kiyo\Desktop\DL\Project\image_data\initial_data_annotated'
check_dataset_integrity(dataset_directory)
# Output Sample
Class 'all_purpose_flour_annotated': Perfect
Class 'almonds_annotated': Perfect
Class 'apple_annotated': Perfect
Class 'apricot_annotated': Perfect
Class 'asparagus_annotated': Perfect
- Renamed all the directories containing samples: as you can see, the directory names changed after annotation and end with an _annotated suffix. We strip that suffix so the directory names again match the class names in the text file (a sketch of this step follows the list).
- After these changes, we again checked that every image has its corresponding annotation and that the directory names match the class list text file.
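A sketch of the suffix cleanup referenced above, which strips the _annotated suffix so directory names match Final_classes.txt again:
import os

dataset_dir = r'C:\Users\Kiyo\Desktop\DL\Project\image_data\initial_data_annotated'  # adjust to your layout
suffix = '_annotated'

for name in os.listdir(dataset_dir):
    old_path = os.path.join(dataset_dir, name)
    if os.path.isdir(old_path) and name.endswith(suffix):
        new_name = name[:-len(suffix)]
        os.rename(old_path, os.path.join(dataset_dir, new_name))
        print(f"Renamed '{name}' to '{new_name}'")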
This completes our dataset preparation, which is the major part of our project. Reaching this level of consistency took a lot of time and several trial-and-error approaches, but it produced a clean, consistent dataset for object detection training.
Text Generation Data
The RecipeNLG dataset, available in RecipeNLG_dataset.csv, encompasses 2,231,142 cooking recipes sourced from RecipeNLG. This extensive dataset, totaling 2.14 GB, contains crucial recipe details such as titles, ingredients, directions, links, sources, and Named Entity Recognition (NER) labels. With label distribution categorized into various ranges and a vast array of unique values, the dataset showcases a diverse and comprehensive collection of cooking recipes. This dataset serves as a valuable resource for training and evaluating models in a multitude of natural language processing tasks, particularly in the context of generating cooking-related text.
Sample
Title | Ingredients | Link | Directions | NER |
---|---|---|---|---|
No-Bake Nut Cookies | ["1 c. firmly packed brown sugar", "1/2 c. evaporated milk", "1/2 tsp. vanilla", "1/2 c. broken nuts... | www.cookbooks.com/Recipe-Details.aspx?id=44874 | ["In a heavy 2-quart saucepan, mix brown sugar, nuts, evaporated milk and butter or margarine.", "St... | ["brown sugar", "milk", "vanilla", "nuts", "butter", "bite size shredded rice biscuits"] |
For training the BART transformer model, we need to prepare tokenized data. First, the dataset is extracted with the unzip command to access the recipe data. Next, we import libraries such as pandas, transformers, tqdm, numpy, and TensorFlow.
!unzip '/user/bhanucha/recipe_data.zip' -d '/user/bhanucha/data'
import pandas as pd
import numpy as np
import tensorflow as tf
from transformers import BartTokenizer, TFBartForConditionalGeneration
from tqdm import tqdm
The BART tokenizer is initialized from the pretrained BART model, and if the tokenizer lacks a padding token, it is added. The dataset is then loaded into a pandas DataFrame.
model_checkpoint = "facebook/bart-base"
tokenizer = BartTokenizer.from_pretrained(model_checkpoint)
if tokenizer.pad_token is None:
    tokenizer.add_special_tokens({'pad_token': tokenizer.eos_token})
data = pd.read_csv('/user/bhanucha/data/dataset/full_dataset.csv')
Subsequently, the ingredients and directions from each recipe are concatenated into text strings and tokenized using the BART tokenizer. The tokenized data is then processed to ensure consistency in length and format, with the tokenized inputs saved for training.
texts = ["Ingredients: " + row['ingredients'] + " Directions: " + row['directions'] for _, row in data.iterrows()]
tokenized_inputs = []
for texts_text in tqdm(texts, desc="Tokenizing Data"):
tokenized_input = tokenizer(
texts_text,
padding="max_length",
truncation=True,
max_length=512,
return_tensors="np"
)
tokenized_inputs.append(tokenized_input['input_ids'])
train_data = np.concatenate(tokenized_inputs, axis=0)
np.save('/user/bhanucha/train_data.npy', train_data)
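As a quick sanity check (illustrative only, not part of the preprocessing itself), a sketch of how the saved array can be reloaded and batched with tf.data before feeding the TFBartForConditionalGeneration model imported above; the batch size is an arbitrary choice:
import numpy as np
import tensorflow as tf

# Reload the tokenized inputs and confirm their shape: (num_recipes, 512)
train_data = np.load('/user/bhanucha/train_data.npy')
print(train_data.shape)

# Wrap in a tf.data pipeline so the model can consume it in shuffled batches
dataset = tf.data.Dataset.from_tensor_slices(train_data).shuffle(10_000).batch(8)
for batch in dataset.take(1):
    print(batch.shape)  # (8, 512)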