---
language:
  - en
license: bsd-3-clause
size_categories:
  - 100K<n<1M
configs:
  - config_name: no-vectors
    data_files: no-vectors/*.parquet
    default: true
  - config_name: openai-text-embedding-3-small
    data_files: openai/text-embedding-3-small/*.parquet
  - config_name: openai-text-embedding-3-large
    data_files: openai/text-embedding-3-large/*.parquet
  - config_name: snowflake-arctic-embed
    data_files: ollama/snowflake-arctic/*.parquet
---

## Loading dataset without vector embeddings

You can load the raw dataset without vectors, like this:

```python
from datasets import load_dataset

dataset = load_dataset("weaviate/wiki-sample", split="train", streaming=True)
```
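Because `streaming=True` returns a lazily iterated dataset, you can inspect the first record without downloading the full split. A minimal sketch (assuming the no-vectors config exposes the same fields as the vector configs, minus `vector`):

```python
from datasets import load_dataset

dataset = load_dataset("weaviate/wiki-sample", split="train", streaming=True)

# Peek at the first streamed record to see which fields are available
first_item = next(iter(dataset))
print(first_item.keys())
print(first_item["title"])
```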

## Loading dataset with vector embeddings

You can also load the dataset with vectors, like this:

```python
from datasets import load_dataset

dataset = load_dataset("weaviate/wiki-sample", "openai-text-embedding-3-small", split="train", streaming=True)
# dataset = load_dataset("weaviate/wiki-sample", "snowflake-arctic-embed", split="train", streaming=True)

for item in dataset:
    print(item["text"])
    print(item["title"])
    print(item["url"])
    print(item["wiki_id"])
    print(item["vector"])
    print()
```
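Streamed datasets are iterated lazily, so if you only need a handful of records you can stop early with `itertools.islice` instead of looping over the whole split. A minimal sketch:

```python
from itertools import islice

from datasets import load_dataset

dataset = load_dataset(
    "weaviate/wiki-sample", "openai-text-embedding-3-small", split="train", streaming=True
)

# Only pull the first three records from the stream
for item in islice(dataset, 3):
    print(item["title"], len(item["vector"]))
```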

## Supported Datasets

### Data only - no vectors

```python
from datasets import load_dataset

dataset = load_dataset("weaviate/wiki-sample", "no-vectors", split="train", streaming=True)
```

You can also skip the config name, as `no-vectors` is the default config:

```python
from datasets import load_dataset

dataset = load_dataset("weaviate/wiki-sample", split="train", streaming=True)
```
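If you prefer to work with a small sample in memory, you can materialize the first few streamed records into a pandas DataFrame. A minimal sketch (the sample size of 100 is arbitrary):

```python
from itertools import islice

import pandas as pd
from datasets import load_dataset

dataset = load_dataset("weaviate/wiki-sample", "no-vectors", split="train", streaming=True)

# Materialize the first 100 streamed records into a DataFrame
df = pd.DataFrame(list(islice(dataset, 100)))
print(df.head())
```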

### OpenAI

`text-embedding-3-small` - 1536d vectors - generated with OpenAI

```python
from datasets import load_dataset

dataset = load_dataset("weaviate/wiki-sample", "openai-text-embedding-3-small", split="train", streaming=True)
```
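As a quick sanity check, you can confirm the embedding dimensionality on the first streamed record (the 1536-dimensional size comes from the config description above):

```python
from datasets import load_dataset

dataset = load_dataset(
    "weaviate/wiki-sample", "openai-text-embedding-3-small", split="train", streaming=True
)

# The vector field should hold a 1536-dimensional embedding
first_item = next(iter(dataset))
assert len(first_item["vector"]) == 1536
```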

`text-embedding-3-large` - 3072d vectors - generated with OpenAI

```python
from datasets import load_dataset

dataset = load_dataset("weaviate/wiki-sample", "openai-text-embedding-3-large", split="train", streaming=True)
```

### Snowflake

`snowflake-arctic-embed` - 1024d vectors - generated with Ollama

```python
from datasets import load_dataset

dataset = load_dataset("weaviate/wiki-sample", "snowflake-arctic-embed", split="train", streaming=True)
```
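The embeddings can be used directly for similarity calculations. A minimal sketch comparing the first two streamed records with NumPy (cosine similarity is just one choice of metric):

```python
from itertools import islice

import numpy as np
from datasets import load_dataset

dataset = load_dataset(
    "weaviate/wiki-sample", "snowflake-arctic-embed", split="train", streaming=True
)

# Compare the embeddings of the first two streamed records
a, b = (np.array(item["vector"]) for item in islice(dataset, 2))
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"{a.shape[0]}-dimensional vectors, cosine similarity: {cosine:.3f}")
```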