---
license: apache-2.0
tags:
- stripedhyena
- long context
- deep signal processing
- hybrid
- biology
- genomics
task_categories:
  - text-generation
language:
  - en
pretty_name: open-genome
configs:
  - config_name: stage1
    data_files:
      - split: train
        path: 
          - "stage1/gtdb/gtdb_train_shard_*"
          - "stage1/imgpr/imgpr_train.parquet"
      - split: validation
        path: 
          - "stage1/gtdb/gtdb_valid_small.parquet"
          - "stage1/imgpr/imgpr_valid_small.parquet"
      - split: test
        path: 
          - "stage1/gtdb/gtdb_test.parquet"
          - "stage1/imgpr/imgpr_test.parquet"
  - config_name: stage2
    data_files:
      - split: train
        path: "stage2/train_stage2.parquet"
      - split: validation
        path: "stage2/valid_stage2.parquet"
      - split: test
        path: "stage2/test_stage2.parquet"
  - config_name: sample
    data_files:
      - split: validation
        path: "stage2/valid_stage2.parquet"
---



### Dataset organization

The OpenGenome dataset is organized into two stages: stage 1 uses a context length of 8k and stage 2 uses a context length of 131k. Each stage has its own train/validation/test splits.

```
- stage1
  - train
  - validation
  - test

- stage2
  - train
  - validation
  - test
```
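
To confirm which configs and splits are available programmatically, here is a minimal sketch using the `datasets` helper functions (the config names match the YAML header of this card):

```
from datasets import get_dataset_config_names, get_dataset_split_names

# List the configs defined for this dataset ('stage1', 'stage2', 'sample')
for config in get_dataset_config_names("LongSafari/open-genome"):
    # List the splits available within each config
    print(config, get_dataset_split_names("LongSafari/open-genome", config))
```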

### Instructions to download

You can load each stage with the Hugging Face `datasets` API, as in the example below.

```
from datasets import load_dataset

# Load the stage 1 config (train, validation, and test splits)
stage1_data = load_dataset("LongSafari/open-genome", "stage1")

# Access just the train split
stage1_train_data = stage1_data["train"]
```

Note: the stage 1 training dataset is sharded into separate files due to its large size.
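
If you would rather not download every shard up front, the `datasets` library also supports streaming, which fetches records lazily. A minimal sketch:

```
from datasets import load_dataset

# Stream the stage 1 train split instead of downloading all shards first
stage1_stream = load_dataset(
    "LongSafari/open-genome", "stage1", split="train", streaming=True
)

# Peek at the first record without materializing the full dataset
first_record = next(iter(stage1_stream))
print(first_record.keys())
```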

We also provide a small `sample` config if you prefer to test out your pipeline first.

```
# The 'sample' config exposes only a small validation split for quick tests
sample_data = load_dataset("LongSafari/open-genome", "sample")["validation"]
```
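
Once loaded, the sample is an ordinary `datasets.Dataset`, so you can inspect its size and columns before wiring it into your pipeline; for example:

```
# Continuing from the snippet above
print(len(sample_data))          # number of records
print(sample_data.column_names)  # columns as defined by the parquet files
print(sample_data[0])            # first record as a Python dict
```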