---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: ents
    list:
    - name: end
      dtype: int64
    - name: label
      dtype: string
    - name: start
      dtype: int64
  - name: sents
    list:
    - name: end
      dtype: int64
    - name: start
      dtype: int64
  - name: tokens
    list:
    - name: dep
      dtype: string
    - name: end
      dtype: int64
    - name: head
      dtype: int64
    - name: id
      dtype: int64
    - name: lemma
      dtype: string
    - name: morph
      dtype: string
    - name: pos
      dtype: string
    - name: start
      dtype: int64
    - name: tag
      dtype: string
  splits:
  - name: train
    num_bytes: 7886693
    num_examples: 4383
  - name: dev
    num_bytes: 1016350
    num_examples: 564
  - name: test
    num_bytes: 991137
    num_examples: 565
  download_size: 1627548
  dataset_size: 9894180
---

# DaNE+

This is a re-annotated version of DaNE. The annotations were produced by a model trained on the Danish dataset DANSK, and all discrepancies between the model predictions and the original DaNE annotations were manually reviewed and corrected by Kenneth C. Enevoldsen. In cases of uncertainty, the original annotation was left unchanged.

## Process of annotation

1) Install the requirements:

```
pip install "prodigy>=1.11.0,<2.0.0" -f https://{DOWNLOAD KEY}@download.prodi.gy
```

2) Create the outline dataset:

```bash
python annotate.py
```

3) Review and correct the annotations using Prodigy.

Add the datasets to Prodigy:

```bash
prodigy db-in dane reference.jsonl
prodigy db-in dane_plus_mdl_pred predictions.jsonl
```

Run the review using Prodigy:

```bash
prodigy review daneplus dane_plus_mdl_pred,dane --view-id ner_manual --label NORP,CARDINAL,PRODUCT,ORGANIZATION,PERSON,WORK_OF_ART,EVENT,LAW,QUANTITY,DATE,TIME,ORDINAL,LOCATION,GPE,MONEY,PERCENT,FACILITY
```

Export the dataset:

```bash
prodigy data-to-spacy daneplus --ner daneplus --lang da -es 0
```

4) Redo the original split:

```bash
python split.py
```
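The feature schema above stores entities and sentences as character-offset spans over `text`. As a minimal sketch of how to resolve those spans, here is a made-up record that mirrors the schema (the field names follow the dataset card; the example text, offsets, and the `extract_entities` helper are illustrative, not part of the dataset):

```python
# A made-up record mirroring the DaNE+ schema: `ents` holds character-offset
# spans into `text` together with a NER label. Real records would come from
# loading the published dataset, e.g. via the `datasets` library.
record = {
    "text": "Kenneth bor i Danmark.",
    "ents": [
        {"start": 0, "end": 7, "label": "PERSON"},
        {"start": 14, "end": 21, "label": "GPE"},
    ],
    "sents": [{"start": 0, "end": 22}],
}

def extract_entities(record):
    """Resolve each entity span to (surface text, label) via char offsets."""
    return [
        (record["text"][ent["start"]:ent["end"]], ent["label"])
        for ent in record["ents"]
    ]

print(extract_entities(record))
# → [('Kenneth', 'PERSON'), ('Danmark', 'GPE')]
```

The same start/end convention (end-exclusive character offsets) applies to the `sents` and `tokens` features.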