## Dataset Splits

For training powerful NER models on the dataset, we manually split it into training, development and test splits at the document level.

The training split consists of 73 documents, the development split of 13 documents, and the test split of 14 documents.

We perform dehyphenation as the only preprocessing step. The final dataset splits can be found in the `splits` folder of this dataset repository.
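For readers who want to work with the split files directly, here is a minimal parsing sketch. It assumes the splits use a whitespace-separated, CoNLL-style column format with `-DOCSTART-` document markers and blank lines between sentences; the exact file names and column layout are assumptions, so check the files in `splits` before relying on this.

```python
def read_documents(lines):
    """Group CoNLL-style lines into documents at `-DOCSTART-` markers.

    Returns a list of documents; each document is a list of sentences,
    and each sentence is a list of (token, tag) pairs.
    NOTE: the column layout (token first, tag last) is an assumption.
    """
    documents, document, sentence = [], [], []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("-DOCSTART-"):
            # A new document begins; flush any pending sentence/document.
            if sentence:
                document.append(sentence)
                sentence = []
            if document:
                documents.append(document)
                document = []
        elif not line.strip():
            # Blank line ends the current sentence.
            if sentence:
                document.append(sentence)
                sentence = []
        else:
            columns = line.split()
            sentence.append((columns[0], columns[-1]))
    # Flush trailing sentence/document at end of file.
    if sentence:
        document.append(sentence)
    if document:
        documents.append(document)
    return documents


# Tiny inline sample standing in for a real split file.
sample = """-DOCSTART- O

Berlin B-LOC
is O

-DOCSTART- O

ACME B-ORG
"""
docs = read_documents(sample.splitlines())
print(len(docs))  # → 2
```

Counting `len(read_documents(...))` on each split file is also a quick sanity check against the document counts stated above.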

## Release Cycles

We plan to release updated versions of this dataset on a regular basis (e.g. monthly).
For now, we first want to collect feedback about the dataset, so the current version is `v0`.

## License

The dataset is (currently) licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).