This BERT model was fine-tuned on more than 400k nuclear-energy-related posts from Twitter/X. It achieves a classification accuracy of 96%.
The model predicts 3 labels: {0: Negative, 1: Neutral, 2: Positive}

Here is an example of how to use it:

from transformers import AutoTokenizer
from transformers import AutoModelForSequenceClassification
from transformers import pipeline
import torch

checkpoint = 'kumo24/bert-sentiment-nuclear'
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Map class indices to sentiment labels and vice versa.
id2label = {0: "negative", 1: "neutral", 2: "positive"}
label2id = {"negative": 0, "neutral": 1, "positive": 2}

# Add a padding token if the tokenizer does not define one.
if tokenizer.pad_token is None:
    tokenizer.add_special_tokens({'pad_token': '[PAD]'})

model = AutoModelForSequenceClassification.from_pretrained(checkpoint,
                                                           num_labels=3,
                                                           id2label=id2label,
                                                           label2id=label2id)

# Run on GPU if available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

sentiment_task = pipeline("sentiment-analysis",
                          model=model,
                          tokenizer=tokenizer)

print(sentiment_task("Michigan Wolverines are Champions, Go Blue!"))
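
The pipeline returns a list of dictionaries, one per input, each with a 'label' and a 'score'. As a minimal sketch, you can also pass a list of texts to classify several posts in one call (the example tweets below are made-up inputs, not from the training data):

tweets = [
    "Nuclear power is key to decarbonizing the grid.",
    "Another delay at the reactor site, costs keep climbing.",
]
results = sentiment_task(tweets)  # one {'label': ..., 'score': ...} dict per input
for tweet, result in zip(tweets, results):
    print(f"{result['label']} ({result['score']:.3f}): {tweet}")

Each returned label is one of negative, neutral, or positive, following the id2label mapping defined above.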
Model size: 109M parameters (safetensors, F32).