---
license: apache-2.0
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: valid
        path: data/valid-*
      - split: test
        path: data/test-*
dataset_info:
  features:
    - name: sequence
      dtype: string
    - name: length
      dtype: int64
  splits:
    - name: train
      num_bytes: 12071713186
      num_examples: 41546293
    - name: valid
      num_bytes: 24293086
      num_examples: 82929
    - name: test
      num_bytes: 19981814
      num_examples: 48941
  download_size: 11690105266
  dataset_size: 12115988086
---

# UniRef50: UniRef sequences clustered at 50% sequence identity

- ~40M protein sequences.
- Split into train, valid, and test splits.

## Usage

```python
from datasets import load_dataset

# Step 1: Load the dataset from the Hugging Face Hub
dataset = load_dataset("zhangzhi/Uniref50")

# Step 2: Access a specific split ("train", "valid", or "test")
train_split = dataset["train"]
print(f"Number of sequences in the train split: {len(train_split)}")
```