Modalities: Text · Formats: json · Languages: English · ArXiv: 2506.05209 · Libraries: Datasets, Dask
Add link to paper and Github repository
#3 opened by nielsr (HF Staff)

Files changed (1):
1. README.md (+8, -6)
README.md CHANGED

@@ -1,16 +1,17 @@
 ---
+language:
+- en
+task_categories:
+- text-generation
+pretty_name: StackExchange
 configs:
 - config_name: default
   data_files:
   - split: train
     path:
     - '*/documents/*.gz'
-task_categories:
-- text-generation
-language:
-- en
-pretty_name: StackExchange
 ---
+
 # Stack Exchange
 
 ## Description
@@ -26,6 +27,8 @@ PyMarkdown was used to convert each comment into plain text.
 Per-document license information is available in the `license` entry of the `metadata` field of each example.
 Code for collecting, processing, and preparing this dataset is available in the [common-pile GitHub repo](https://github.com/r-three/common-pile).
 
+[The Common Pile v0.1 paper](https://huggingface.co/papers/2506.05209)
+
 ## Dataset Statistics
 | Documents | UTF-8 GB |
 |-----------|----------|
@@ -34,7 +37,6 @@ Code for collecting, processing, and preparing this dataset is available in the
 ## License Issues
 While we aim to produce datasets with completely accurate licensing information, license laundering and inaccurate metadata can cause us to erroneously assign the incorrect license to some documents (for further discussion of this limitation, please see [our paper](https://huggingface.co/papers/2506.05209)). If you believe you have found an instance of incorrect licensing in this dataset, please [start a discussion](https://github.com/r-three/common-pile/discussions/new) on this repository.
 
-
 ## Other Versions
 This is the "raw" version of the StackExchange dataset.
 If you are looking for the filtered version used to train [Comma v0.1](https://huggingface.co/common-pile/comma-v0.1), you can find it [here](https://huggingface.co/datasets/common-pile/stackexchange_filtered).
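
For anyone sanity-checking the card after this change: the `configs` block in the front matter is what lets the `datasets` library resolve the `*/documents/*.gz` shards into a `train` split, and the README states that each example's per-document license sits in the `license` entry of its `metadata` field. The following is a minimal sketch, not part of this PR; the repository id `common-pile/stackexchange` and the exact shape of `metadata` (dict vs. JSON-encoded string) are assumptions.

```python
import json
from datasets import load_dataset

# Assumption: the raw dataset is hosted at "common-pile/stackexchange".
# Streaming avoids downloading every '*/documents/*.gz' shard just to peek at one example.
ds = load_dataset("common-pile/stackexchange", split="train", streaming=True)

example = next(iter(ds))
meta = example["metadata"]
# The card says license info lives in metadata["license"]; the field may be
# stored as a dict or as a JSON string depending on the schema, so handle both.
if isinstance(meta, str):
    meta = json.loads(meta)
print(meta.get("license"))
```

If the split or field names differ from these assumptions, `ds.features` (or the dataset viewer on the Hub) is the quickest way to confirm the actual schema.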