Danish BERT Tone for the detection of subjectivity/objectivity

The BERT Tone model detects whether a Danish text is subjective or objective. It is based on finetuning the pretrained Danish BERT model by BotXO.

See the DaNLP documentation for more details.

Here is how to use the model:

from transformers import BertTokenizer, BertForSequenceClassification

# Load the finetuned subjectivity/objectivity classifier and its tokenizer
model = BertForSequenceClassification.from_pretrained("alexandrainst/da-subjectivivity-classification-base")
tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-subjectivivity-classification-base")
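
A minimal inference sketch follows, assuming the model's config exposes the standard id2label mapping of a BertForSequenceClassification head; the example sentence is only illustrative. It tokenizes a Danish sentence, runs a forward pass, and prints the highest-scoring class.

import torch

text = "Jeg synes, filmen var fantastisk."  # illustrative Danish sentence
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_id = logits.argmax(dim=-1).item()
# Map the predicted class index back to its label (e.g. subjective/objective)
print(model.config.id2label[pred_id])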

Training data

The training data come from the Twitter Sentiment and EuroParl sentiment 2 datasets.
