---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: title
    dtype: string
  - name: ingredients
    dtype: string
  - name: directions
    dtype: string
  - name: link
    dtype: string
  - name: source
    dtype: string
  - name: NER
    sequence: string
  - name: metadata
    struct:
    - name: NER
      sequence: string
    - name: title
      dtype: string
  - name: document
    dtype: string
  - name: all-MiniLM-L6-v2
    sequence: float32
  - name: bm42-all-minilm-l6-v2-attentions
    struct:
    - name: indices
      sequence: int64
    - name: values
      sequence: float64
  splits:
  - name: train
    num_bytes: 1176543723
    num_examples: 350000
  download_size: 1101274243
  dataset_size: 1176543723
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Recipe Short - Dense and Sparse Embeddings Dataset

This dataset is based on the [rk404/recipe_short](https://huggingface.co/datasets/rk404/recipe_short) dataset, which itself is derived from the [RecipeNLG](https://recipenlg.cs.put.poznan.pl/) dataset. RecipeNLG is a large-scale, high-quality dataset designed for natural language generation tasks in the culinary domain. This dataset includes dense and sparse embeddings for each recipe, generated using the following models:

1. **Dense Embeddings**: Created using the `sentence-transformers/all-MiniLM-L6-v2` model with the `fastembed` library.
2. **Sparse Embeddings**: Generated using the `Qdrant/bm42-all-minilm-l6-v2-attentions` model with the `fastembed` library.

The embeddings were computed on Kaggle GPU resources. The dataset is intended for text similarity, search, and semantic information retrieval tasks over recipe-related content.

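For reference, the snippet below is a minimal sketch of how such dense and sparse embeddings can be produced with `fastembed`; it is not the exact notebook code (see the generation notebook linked at the end of this card). It assumes a recent `fastembed` release; GPU execution additionally requires the `fastembed-gpu` package and the ONNX Runtime CUDA provider.

```python
from fastembed import TextEmbedding, SparseTextEmbedding

documents = [
    "Classic pancakes: whisk flour, milk, eggs and melted butter; fry until golden.",
]

# Dense model behind the `all-MiniLM-L6-v2` column (384-dimensional vectors).
dense_model = TextEmbedding(model_name="sentence-transformers/all-MiniLM-L6-v2")

# Sparse BM42 model behind the `bm42-all-minilm-l6-v2-attentions` column.
sparse_model = SparseTextEmbedding(model_name="Qdrant/bm42-all-minilm-l6-v2-attentions")

dense_vectors = list(dense_model.embed(documents))    # numpy arrays of shape (384,)
sparse_vectors = list(sparse_model.embed(documents))  # SparseEmbedding objects (indices, values)

print(dense_vectors[0].shape)
print(sparse_vectors[0].indices[:5], sparse_vectors[0].values[:5])
```
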
### Sparse Embedding Model Reference

The BM42 sparse embedding model focuses on capturing the most important tokens in the text. It assigns attention-based scores to key terms, which is useful for keyword-based search and sparse retrieval tasks.

You can read more about BM42 sparse embeddings in the [Qdrant BM42 article](https://qdrant.tech/articles/bm42/#:~:text=Despite%20all%20of%20its%20advantages,%20BM42) and the [bm42_eval repository](https://github.com/qdrant/bm42_eval/).

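As a usage sketch (not part of the original card), the stored `indices`/`values` pairs map directly onto Qdrant's sparse-vector format. This assumes the `datasets` and `qdrant-client` packages are installed; the repository id below is this dataset's id.

```python
from datasets import load_dataset
from qdrant_client import models

# Stream one example from this dataset (train split, default config).
ds = load_dataset(
    "otacilio-psf/recipe_short_dense_and_sparse_embeddings",
    split="train",
    streaming=True,
)
example = next(iter(ds))

dense_vector = example["all-MiniLM-L6-v2"]            # 384-dimensional dense embedding
sparse = example["bm42-all-minilm-l6-v2-attentions"]  # {"indices": [...], "values": [...]}

# The attention-derived sparse embedding becomes a Qdrant SparseVector as-is.
sparse_vector = models.SparseVector(indices=sparse["indices"], values=sparse["values"])

print(example["title"], len(dense_vector), len(sparse_vector.indices))
```
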
### Generation Code

[recipe-short-embeddings-gpu.ipynb](https://huggingface.co/datasets/otacilio-psf/recipe_short_dense_and_sparse_embeddings/blob/main/recipe-short-embeddings-gpu.ipynb)