---
dataset_info:
  features:
  - name: _id
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  - name: openai
    sequence: float32
  - name: combined_text
    dtype: string
  - name: text-embedding-3-large-3072-embedding
    sequence: float64
  splits:
  - name: train
    num_bytes: 31467941307
    num_examples: 1000000
  download_size: 25031220619
  dataset_size: 31467941307
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: apache-2.0
task_categories:
- feature-extraction
language:
- en
pretty_name: OpenAI v3 Large 1M
size_categories:
- 1M<n<10M
---
|
|
|
# 1M OpenAI Embeddings: text-embedding-3-large 3072 dimensions + ada-002 1536 dimensions — parallel dataset
|
|
|
- Created: February 2024.
- Text used for embedding: title (string) + text (string)
- Embedding model: text-embedding-3-large
- This dataset was generated from the first 1M entries of https://huggingface.co/datasets/BeIR/dbpedia-entity, as extracted by @KShivendu_ [here](https://huggingface.co/datasets/KShivendu/dbpedia-entities-openai-1M).
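As a minimal sketch of working with the card's schema: the train split can be streamed with the `datasets` library (avoiding the full ~25 GB download), and the 3072-dimensional vectors compared with cosine similarity. The Hub repo id below is a placeholder, not this dataset's actual path.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

if __name__ == "__main__":
    from datasets import load_dataset  # pip install datasets

    # Placeholder repo id -- substitute this dataset's actual path on the Hub.
    ds = load_dataset("your-org/openai-v3-large-1m", split="train", streaming=True)
    it = iter(ds)
    a, b = next(it), next(it)
    # The large-model vectors live under the feature name from the card's schema.
    vec_a = a["text-embedding-3-large-3072-embedding"]
    vec_b = b["text-embedding-3-large-3072-embedding"]
    print(len(vec_a))                        # expected: 3072, per the schema
    print(cosine_similarity(vec_a, vec_b))
```

Streaming mode iterates over the parquet shards lazily, so a quick look at a few rows does not require the full `download_size` listed above.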
|
|
|
|