---
language:
  - en
license: apache-2.0
size_categories:
  - 1K<n<10K
task_categories:
  - text-generation
  - fill-mask
  - feature-extraction
configs:
  - config_name: abj-fulltext
    data_files:
      - split: train
        path: abj-fulltext/train-*
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
  - config_name: embeddings-jina-sm
    data_files:
      - split: train
        path: embeddings-jina-sm/train-*
      - split: validation
        path: embeddings-jina-sm/validation-*
      - split: test
        path: embeddings-jina-sm/test-*
  - config_name: embeddings-text-nomic_text_v1
    data_files:
      - split: train
        path: embeddings-text-nomic_text_v1/train-*
dataset_info:
  - config_name: abj-fulltext
    features:
      - name: relative_path
        dtype: string
      - name: section
        dtype: string
      - name: filename
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 67883530
        num_examples: 53
    download_size: 37147931
    dataset_size: 67883530
  - config_name: default
    features:
      - name: relative_path
        dtype: string
      - name: section
        dtype: string
      - name: filename
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: train
        num_bytes: 190183849.4385724
        num_examples: 1384
      - name: validation
        num_bytes: 4946978.742621826
        num_examples: 36
      - name: test
        num_bytes: 5084394.818805765
        num_examples: 37
    download_size: 115721385
    dataset_size: 200215223
  - config_name: embeddings-jina-sm
    features:
      - name: relative_path
        dtype: string
      - name: section
        dtype: string
      - name: filename
        dtype: string
      - name: text
        dtype: string
      - name: embedding
        sequence: float64
    splits:
      - name: train
        num_bytes: 133288341
        num_examples: 1254
      - name: validation
        num_bytes: 4916417
        num_examples: 33
      - name: test
        num_bytes: 2822239
        num_examples: 33
    download_size: 84812247
    dataset_size: 141026997
  - config_name: embeddings-text-nomic_text_v1
    features:
      - name: relative_path
        dtype: string
      - name: section
        dtype: string
      - name: filename
        dtype: string
      - name: text
        dtype: string
      - name: text-embedding
        sequence: float64
    splits:
      - name: train
        num_bytes: 135856533
        num_examples: 1254
    download_size: 82483500
    dataset_size: 135856533
thumbnail: https://i.ibb.co/DCjs6R2/bessinternal.png
extra_gated_prompt: >-
  By accessing this dataset, you agree to use it responsibly and ethically. You
  agree not to use the dataset for any form of bioterrorism, to harm bees or
  other pollinators, to disrupt ecosystems, or to commit any act that negatively
  impacts biodiversity or public health. You also agree not to use this dataset
  to develop technologies or conduct experiments that could cause harm to
  humans, animals, or the environment.
extra_gated_fields:
  I agree to not use the dataset for the development, research, or deployment of autonomous weapons or harmful biological agents: checkbox
  I want to use this dataset for:
    type: select
    options:
      - Research
      - Education
      - Conservation Efforts
      - Enlightenment
      - label: Other (please specify)
        value: other
extra_gated_heading: Commit to Ethical Use of the Apicultural Data
extra_gated_button_content: I love bees
tags:
  - bees
  - biology
  - beekeeping
---

# Dataset Card for "bees-internal"

Full-length OCR of bee material and other Lore. Documents containing more than 0.5 MB of text are split into multiple chunks, to avoid destroying the CPU during tokenization.
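A minimal sketch of that chunking step (assumed implementation; only the ~0.5 MB threshold is taken from the description above — the actual script is not published here):

```python
# Split a document's text into chunks of at most ~0.5 MB each.
# Illustrative sketch, not the exact script used to build the dataset.

MAX_CHUNK_BYTES = 512_000  # ~0.5 MB per chunk (assumed threshold)


def split_text(text: str, max_bytes: int = MAX_CHUNK_BYTES) -> list[str]:
    """Split `text` on paragraph boundaries so each chunk stays under `max_bytes`."""
    if len(text.encode("utf-8")) <= max_bytes:
        return [text]
    chunks, current, size = [], [], 0
    for para in text.split("\n\n"):
        para_bytes = len(para.encode("utf-8")) + 2  # account for the "\n\n" separator
        if current and size + para_bytes > max_bytes:
            chunks.append("\n\n".join(current))
            current, size = [], 0
        current.append(para)
        size += para_bytes
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Note that a single paragraph larger than the threshold is kept whole; splitting on paragraph boundaries avoids cutting words or sentences mid-stream.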

## counts

All counts are for the `default` config.

### tokens

GPT-4 tiktoken token count:

```text
         token_count
count    1384.000000
mean    32776.423410
std     33652.185553
min       215.000000
25%      3514.000000
50%      8942.000000
75%     65717.750000
max    160077.000000
```

- Total count: 45.36 M tokens

meta-llama/Meta-Llama-3-8B token count:

```text
         token_count
count    1384.000000
mean    31807.650289
std     32651.250770
min       212.000000
25%      3412.750000
50%      8733.500000
75%     64047.000000
max    156381.000000
```

- Total count: 44.02 M tokens
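Summaries in this shape can be reproduced with `pandas.Series.describe()` over per-document token counts (sketch; the values below are illustrative placeholders, not the dataset's counts):

```python
# Reproduce the summary-statistics table format from per-document token counts.
# Sketch only: `token_counts` is a stand-in. In practice it would come from, e.g.:
#   enc = tiktoken.encoding_for_model("gpt-4")
#   token_counts = [len(enc.encode(t)) for t in texts]
# or, for meta-llama/Meta-Llama-3-8B:
#   tok = transformers.AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
#   token_counts = [len(tok.encode(t)) for t in texts]
import pandas as pd

token_counts = [215, 3514, 8942, 65717, 160077]  # illustrative values only
counts = pd.Series(token_counts, name="token_count")

print(counts.describe())  # count / mean / std / min / quartiles / max
print(f"Total count: {counts.sum() / 1e6:.2f} M tokens")
```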

## input files

```text
INFO: Found 1457 text files - 2024-Feb-20_13-19
INFO: Train size: 1384
Validation size: 36
Test size: 37
```
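The 1384/36/37 split of the 1457 input files is consistent with a simple ~95/2.5/2.5 shuffled partition. A hedged sketch (the actual split script, seed, and proportions are not published; only the resulting sizes match the log above):

```python
import random

# Partition a list of file paths into train/validation/test (~95/2.5/2.5).
# Sketch only: seed and proportions are assumptions; for 1457 inputs this
# yields the 1384/36/37 sizes reported in the log above.


def split_files(files: list, seed: int = 0) -> tuple[list, list, list]:
    files = files[:]  # don't mutate the caller's list
    random.Random(seed).shuffle(files)
    n = len(files)
    n_train = int(0.95 * n)
    n_val = int(0.025 * n)
    return (
        files[:n_train],
        files[n_train : n_train + n_val],
        files[n_train + n_val :],  # test gets the remainder
    )


train, val, test = split_files([f"doc_{i}.txt" for i in range(1457)])
print(len(train), len(val), len(test))  # 1384 36 37
```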