norabelrose committed on
Commit 7ed6a2f
1 Parent(s): 88c20de

Create README.md

Files changed (1)
  1. README.md +23 -18
README.md CHANGED
@@ -1,21 +1,26 @@
  ---
- dataset_info:
-   features:
-   - name: raw_content
-     dtype: string
-   - name: doc_id
-     dtype: string
-   - name: meta
-     dtype: string
-   - name: quality_signals
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 74731304554
-     num_examples: 6391639
-   download_size: 33337342851
-   dataset_size: 74731304554
+ task_categories:
+ - text-generation
+ language:
+ - en
+ - de
+ - fr
+ - es
+ - it
  ---
- # Dataset Card for "rpj-v2-sample"
-
- [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+ This is a mirror of the `sample-10B` subset of [RedPajama-Data-V2](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2), which we have re-uploaded to resolve issues with the original download script.
+
+ ### Getting Started
+
+ RedPajama-V2 is an open dataset for training large language models. The dataset includes over 100B text
+ documents drawn from 84 CommonCrawl snapshots and processed using
+ the [CCNet](https://github.com/facebookresearch/cc_net) pipeline. Of these, 30B documents in the corpus
+ additionally come with quality signals. We also provide the ids of duplicate documents, which can be
+ used to create a dataset of 20B deduplicated documents.
+
+ Check out our [blog post](https://together.ai/blog/redpajama-data-v2) for more details on the build process, dataset
+ structure, and schema.
+
+ A full set of scripts to recreate the dataset, including the quality signals, can be
+ found [here](https://github.com/togethercomputer/RedPajama-Data).
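
The schema in the removed front matter stores every feature, including `meta` and `quality_signals`, as a plain string; in RedPajama-V2 the latter two hold JSON-encoded payloads. A minimal sketch of decoding them, using a mock record in place of a real row (the field names inside the payloads, such as `ccnet_perplexity`, are illustrative assumptions drawn from the RedPajama-V2 documentation, not from this card):

```python
import json

# Mock record mirroring the card's schema: raw_content, doc_id, meta, and
# quality_signals are all string-typed features. The meta and quality_signals
# values are JSON-encoded strings, so they must be parsed before use.
record = {
    "raw_content": "Example document text.",
    "doc_id": "example-doc-id",  # placeholder id, not a real document id
    "meta": json.dumps({"url": "https://example.com/", "language": "en"}),
    "quality_signals": json.dumps({"ccnet_perplexity": [[0, 22, 291.3]]}),
}

meta = json.loads(record["meta"])
signals = json.loads(record["quality_signals"])

print(meta["language"])             # document language code
print(signals["ccnet_perplexity"])  # one of the per-document quality signals
```

Parsing on demand like this keeps the stored features uniform (all strings) while still allowing structured access to the nested metadata and quality-signal values.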