The dataset script is more or less ready, and one file has been correctly converted so far: `https://opendata.iisys.de/systemintegration/Datasets/CommonCrawl/head/de_head_0000_2015-48.tar.gz`

You can try downloading the file as follows:
```python
from datasets import load_dataset
ds = load_dataset("flax-community/german_common_crawl", "first")
```
This can be done on your local computer and should only take around 2GB of disk space.
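
For a quick sanity check after loading, you can inspect the result; note that the `train` split name below is an assumption, and `print(ds)` shows the actual splits and columns:

```python
# Inspect the loaded dataset; split and column names may differ,
# so check print(ds) for the actual schema.
print(ds)
print(ds["train"][0])
```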
This, however, only loads the first of more than 100 files.

We now need to add **all** other files to this repo. This can be done as follows:

1) Clone this repo (assuming `git lfs` is installed): `git clone https://huggingface.co/datasets/flax-community/german_common_crawl`
2) For each file from `https://opendata.iisys.de/systemintegration/Datasets/CommonCrawl/head/de_head_0000_2016-18.tar.gz` to `https://opendata.iisys.de/systemintegration/Datasets/CommonCrawl/middle/de_middle_0009_2019-47.tar.gz`, run the command `./convert_file.sh <file_name>`. This command downloads the file via `wget`, filters out all text that falls below a threshold as explained here: https://opendata.iisys.de/systemintegration/Datasets/CommonCrawl/middle/de_middle_0009_2019-47.tar.gz, and then converts the file into the correct format.
3) Upload the file to this repo:
`git add . && git commit -m "add file x" && git push`

Ideally this can be done in a loop on a computer that has enough CPU memory. (Note that if this is done on a TPU VM, make sure to disable the TPU via `export JAX_PLATFORM_NAME=cpu`.)
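
A minimal sketch of such a loop, assuming the archive URLs have been collected one per line in a hypothetical `files.txt` (the actual file list lives on the opendata.iisys.de server):

```python
import subprocess

# files.txt is a hypothetical helper file listing one archive URL per line.
with open("files.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

for url in urls:
    file_name = url.rsplit("/", 1)[-1]
    # Step 2: download, filter, and convert the archive.
    subprocess.run(["./convert_file.sh", url], check=True)
    # Step 3: commit and push the converted file.
    subprocess.run(["git", "add", "."], check=True)
    subprocess.run(["git", "commit", "-m", f"add {file_name}"], check=True)
    subprocess.run(["git", "push"], check=True)
```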