---
license: cc-by-sa-4.0
language:
- en
size_categories:
- 10B<n<100B
---

# Dataset Summary
Paragraph embeddings for every article in English Wikipedia (not the Simple English version), based on the wikimedia/wikipedia dataset, snapshot 20231101.en.

Embeddings were generated with avsolatorio/GIST-small-Embedding-v0 and are quantized to int8.

You can load the data with the following:
```
from datasets import load_dataset

ds = load_dataset(path="Abrak/wikipedia-paragraph-embeddings-en-gist-complete", data_dir="20231101.en")
```
# Dataset Structure

The structure of the dataset is designed to use minimal storage space.

## Data Instances

An example looks as follows:
```
{ 'id': '12.1',
  'embedding': [10, -14, -42, -3, 5, 4, 7, 17, -8, 18, ...]
}
```
## Data Fields
The data fields are the same for all records; a short usage sketch follows the list:
* `id` (str): the ID of the matching article in wikimedia/wikipedia, a `.` separator, and the sequential number of the paragraph within the article. Paragraph numbers are not left-padded.
* `embedding`: a list of 384 int8 values (from -128 to 127)

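The sketch below shows how a record's fields might be unpacked. The record shown is hypothetical, and casting to float before similarity math is a recommendation, not part of the dataset:

```
import numpy as np

# Hypothetical record in the dataset's format (384 int8 values)
record = {'id': '12.1', 'embedding': [10, -14, -42, -3, 5, 4] * 64}

# '12.1' = article ID '12' in wikimedia/wikipedia, paragraph 1 of that article
article_id, paragraph_number = record['id'].split('.')

# int8 arithmetic overflows easily; cast to float before any similarity math
vector = np.asarray(record['embedding'], dtype=np.float32)
vector /= np.linalg.norm(vector)  # unit-normalize for cosine similarity
```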
# Details
## Source Data
The data is sourced directly from the wikimedia/wikipedia dataset, in the 20231101.en directory.
This is English-language article text, stripped of formatting and other content that is not natural language.
See the [wikimedia/wikipedia dataset card](https://huggingface.co/datasets/wikimedia/wikipedia) for more information.

Article text was split into paragraphs on two newlines (`\n\n`).
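A minimal sketch of that split (dropping whitespace-only fragments is an assumption here; the exact preprocessing is in the repo linked below):

```
article_text = "First paragraph.\n\nSecond paragraph.\n\n\n"

# Split on blank lines; drop whitespace-only fragments (assumed behavior)
paragraphs = [p for p in article_text.split("\n\n") if p.strip()]
print(paragraphs)  # ['First paragraph.', 'Second paragraph.']
```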
## Embedding Calculation
Embeddings were calculated in batches of 1300 paragraphs with sentence_transformers and the unquantized
GIST-small-Embedding-v0 model, with precision set to int8. The complete run took about 20 hours on an
Nvidia A40. The full calculation code is in
[commit 5132104f1fa59d9b212844f6f7a93232193958f2 of setup.py](https://github.com/abrakjamson/The-Archive/commit/5132104f1fa59d9b212844f6f7a93232193958f2)
in the GitHub repo for my project, [The Archive](https://github.com/abrakjamson/The-Archive).

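The core of that calculation presumably looks like the following sketch (not the exact code from the commit; the input list is a stand-in):

```
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("avsolatorio/GIST-small-Embedding-v0")
paragraphs = ["First paragraph text.", "Second paragraph text."]  # stand-in input

# precision="int8" quantizes the 384-dimension float vectors to int8
embeddings = model.encode(paragraphs, batch_size=1300, precision="int8")
```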
# Licensing Information
These embeddings are a derivative of Wikipedia article text, which is under [CC-BY-SA-4.0](https://en.wikipedia.org/wiki/Wikipedia:Text_of_the_Creative_Commons_Attribution-ShareAlike_4.0_International_License),
a copyleft license, as well as [GFDL](https://en.wikipedia.org/wiki/Wikipedia:Text_of_the_GNU_Free_Documentation_License).
These embeddings inherit the same licenses. See the [Wikipedia Copyrights page](https://en.wikipedia.org/wiki/Wikipedia:Copyrights)
for details.