An xlm-roberta-base model fine-tuned on the DaNE dataset, reaching 97.1 macro-F1 on the test set.

| Test metric             | Result              |
|-------------------------|---------------------|
| test_f1_mac_dane_ner    | 0.9713183641433716  |
| test_loss_dane_ner      | 0.11384682357311249 |
| test_prec_mac_dane_ner  | 0.8712055087089539  |
| test_rec_mac_dane_ner   | 0.8684446811676025  |
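The metric names suggest per-dataset test logging. As a rough, hedged illustration only (not the actual evaluation script used for this model), span-level macro precision, recall, and F1 for a BIO tagger could be computed with seqeval along these lines; the label sequences below are placeholders:

```python
# Hedged sketch: macro-averaged NER metrics with seqeval.
# The label sequences are placeholders, not the real DaNE test data.
from seqeval.metrics import f1_score, precision_score, recall_score

y_true = [["B-PER", "I-PER", "O", "O", "B-LOC"]]  # gold BIO tags, one list per sentence
y_pred = [["B-PER", "I-PER", "O", "O", "B-LOC"]]  # model predictions, same shape

print("precision:", precision_score(y_true, y_pred, average="macro"))
print("recall:   ", recall_score(y_true, y_pred, average="macro"))
print("f1:       ", f1_score(y_true, y_pred, average="macro"))
```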
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# Load the fine-tuned tokenizer and token-classification model from the Hub
tokenizer = AutoTokenizer.from_pretrained("EvanD/xlm-roberta-base-danish-ner-daner")
ner_model = AutoModelForTokenClassification.from_pretrained("EvanD/xlm-roberta-base-danish-ner-daner")

# NER pipeline that merges sub-word tokens into whole entity spans
nlp = pipeline("ner", model=ner_model, tokenizer=tokenizer, aggregation_strategy="simple")
example = "Mit navn er Amadeus Wolfgang, og jeg bor i Berlin"

ner_results = nlp(example)
print(ner_results)
```
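With `aggregation_strategy="simple"`, the pipeline returns a list of dictionaries, one per detected entity, with keys such as `entity_group`, `score`, `word`, `start`, and `end`. As a small follow-up sketch (assuming the `ner_results` variable from the snippet above), the entities could be grouped by type like this:

```python
# Hedged sketch: group pipeline output by entity type.
# Label names (PER, LOC, ORG, MISC) follow the DaNE tag set.
from collections import defaultdict

entities_by_type = defaultdict(list)
for ent in ner_results:
    entities_by_type[ent["entity_group"]].append(ent["word"])

for label, words in entities_by_type.items():
    print(f"{label}: {', '.join(words)}")
```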