---
license: cc-by-4.0
language:
- el
metrics:
- f1
pipeline_tag: text-classification
---
# Hellenic Sentiment AI - Version 2.0
![HellenicSentimentAI Logo](https://huggingface.co/gsar78/HellenicSentimentAI/resolve/main/HellenicSentimentAI_logo.png?download=true)
## Model Description
This is the second version of Hellenic Sentiment AI.
Like the first version, it is an open-weights model designed for both **emotion** and **sentiment** classification of Greek-language text.
The new emotion classifier is based on a custom multi-label classification architecture that extends the previous version of the model (version 1.1).
The following 18 emotion labels are available for classification:
```Python
emotion_labels = [
'joy', 'trust', 'excitement', 'gratitude', 'hope', 'love', 'pride',
'anger', 'disgust', 'fear', 'sadness', 'anxiety', 'frustration', 'guilt',
'disappointment', 'surprise', 'anticipation', 'neutral'
]
```
The Sentiment polarity labels remain the same as in Version 1.1 of the model.
For reference, these are:
```Python
sentiment_labels = ['negative', 'neutral', 'positive']
```
## Model Details
- **Model Name:** Hellenic Sentiment AI
- **Model Version:** 2.0
- **Languages:** Emotion classification: Greek only (version 2.0); sentiment polarity: multilingual, el, en, fr, it, es, de, ar (inherited from version 1.1)
- **Framework:** Transformers from HuggingFace
- **Max Sequence Length:** 512 tokens (longer inputs are truncated; see the sketch after this list)
- **Base Architecture:** RoBERTa
- **Training Data:** Version 2.0 was trained on a custom, curated dataset of Greek-language reviews of products, places, restaurants, etc. The reviews were hand-picked, and the emotion labels were assigned manually by a human annotator.
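
As a quick illustration of the 512-token limit, the short sketch below (reusing the repository id from the Usage section) shows how a longer input is truncated at tokenization time; the repeated sentence is only a stand-in for a long review.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gsar78/HellenicSentimentAI_v2")

# A deliberately long input: a short Greek sentence repeated many times
long_text = "Η εξυπηρέτηση ήταν άψογη. " * 300

# Anything beyond 512 tokens is cut off before reaching the model
encoded = tokenizer(long_text, truncation=True, max_length=512, return_tensors="pt")
print(encoded["input_ids"].shape)  # expected: torch.Size([1, 512])
```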
## Production readiness
This model is intended as a production-grade sentiment analysis solution: it was designed and trained for downstream applications and tested with real-world deployment in mind, providing accurate and reliable sentiment analysis for a wide range of use cases.
## Ongoing Improvement
The model is regularly updated and fine-tuned with new data and techniques.
This ongoing improvement lets it adapt to emerging trends, nuances, and complexities in language and maintain its performance and accuracy in production environments.
## Usage
For a quick start, you can run the full example in this [Google Colab notebook](https://colab.research.google.com/drive/1Hr7NCCA3VprpFL8WLpO3lKHQaUlYkF62?usp=sharing).
Alternatively, embed the following code in your application:
```python
import torch
from torch import nn
from transformers import AutoTokenizer, AutoConfig, XLMRobertaForSequenceClassification

# Define the CustomModel class, which predicts BOTH sentiment polarity AND emotions
class CustomModel(XLMRobertaForSequenceClassification):
    def __init__(self, config, num_emotion_labels):
        super(CustomModel, self).__init__(config)
        self.num_emotion_labels = num_emotion_labels
        self.dropout_emotion = nn.Dropout(config.hidden_dropout_prob)
        # Emotion head: a small MLP on top of the [CLS] hidden state
        self.emotion_classifier = nn.Sequential(
            nn.Linear(config.hidden_size, 512),
            nn.Mish(),
            nn.Dropout(0.3),
            nn.Linear(512, num_emotion_labels)
        )
        self._init_weights(self.emotion_classifier[0])
        self._init_weights(self.emotion_classifier[3])

    def _init_weights(self, module):
        if isinstance(module, nn.Linear):
            module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
            if module.bias is not None:
                module.bias.data.zero_()

    def forward(self, input_ids=None, attention_mask=None, sentiment=None, labels=None):
        outputs = self.roberta(input_ids=input_ids, attention_mask=attention_mask)
        sequence_output = outputs[0]
        if len(sequence_output.shape) != 3:
            raise ValueError(f"Expected sequence_output to have 3 dimensions, got {sequence_output.shape}")
        # Emotion logits from the [CLS] hidden state
        cls_hidden_states = sequence_output[:, 0, :]
        cls_hidden_states = self.dropout_emotion(cls_hidden_states)
        emotion_logits = self.emotion_classifier(cls_hidden_states)
        # Sentiment logits from the original classification head
        with torch.no_grad():
            cls_token_state = sequence_output[:, 0, :].unsqueeze(1)
            sentiment_logits = self.classifier(cls_token_state).squeeze(1)
        if labels is not None:
            class_weights = torch.tensor([1.0] * self.num_emotion_labels).to(labels.device)
            loss_fct = nn.BCEWithLogitsLoss(pos_weight=class_weights)
            loss = loss_fct(emotion_logits, labels)
            return {"loss": loss, "emotion_logits": emotion_logits, "sentiment_logits": sentiment_logits}
        return {"emotion_logits": emotion_logits, "sentiment_logits": sentiment_logits}

# Load the tokenizer and model from the Hub repository
model_dir = "gsar78/HellenicSentimentAI_v2"
tokenizer = AutoTokenizer.from_pretrained(model_dir)
config = AutoConfig.from_pretrained(model_dir)
model = CustomModel.from_pretrained(model_dir, config=config, num_emotion_labels=18)

# Function to predict sentiment and emotion
def predict(texts):
    # Tokenize the input texts
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt", max_length=512)

    # Move the model and inputs to the same device
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    inputs = {k: v.to(device) for k, v in inputs.items()}
    model.to(device)
    model.eval()  # Set the model to evaluation mode

    # Get model predictions
    with torch.no_grad():
        outputs = model(**inputs)

    # Extract logits
    emotion_logits = outputs["emotion_logits"]
    sentiment_logits = outputs["sentiment_logits"]

    # Convert logits to probabilities
    emotion_probs = torch.sigmoid(emotion_logits)             # multi-label: independent sigmoids
    sentiment_probs = torch.softmax(sentiment_logits, dim=1)  # single-label: softmax

    # Convert tensors to percentage lists for the first (and only) sample
    emotion_probs_list = (emotion_probs * 100).tolist()[0]
    sentiment_probs_list = (sentiment_probs * 100).tolist()[0]

    # Define the sentiment and emotion labels
    sentiment_labels = ['negative', 'neutral', 'positive']
    emotion_labels = [
        'joy', 'trust', 'excitement', 'gratitude', 'hope', 'love', 'pride',
        'anger', 'disgust', 'fear', 'sadness', 'anxiety', 'frustration', 'guilt',
        'disappointment', 'surprise', 'anticipation', 'neutral'
    ]

    # Display thresholds (in %): keep emotions above 0.30% and all sentiment scores
    emotion_threshold = 0.30
    sentiment_threshold = 0.0

    # Map probabilities to their corresponding labels
    emotion_results = {label: prob for label, prob in zip(emotion_labels, emotion_probs_list) if prob > emotion_threshold}
    sentiment_results = {label: prob for label, prob in zip(sentiment_labels, sentiment_probs_list) if prob > sentiment_threshold}

    return emotion_results, sentiment_results

# Example usage
sample_texts = ["Απολαύσαμε μια υπέροχη βραδιά σε αυτό το εστιατόριο. "
                "Το μενού ήταν πολύ καλά σχεδιασμένο και κάθε πιάτο ήταν μια γευστική έκπληξη. "
                "Η εξυπηρέτηση ήταν άψογη και η ατμόσφαιρα ευχάριστη. Σίγουρα θα επιστρέψουμε για άλλη μια φορά."]
print("Text: ", sample_texts[0])

emotion_results, sentiment_results = predict(sample_texts)

# Print the results
print("\nSentiment probabilities (%):")
for label, prob in sentiment_results.items():
    print(f" {label}: {prob:.2f}%")

print("\nEmotion probabilities (%):")
for label, prob in emotion_results.items():
    print(f" {label}: {prob:.2f}%")

# Change the text and predict again
print("\n======")
print("\nNew prediction:")
sample_texts = ["Η τελευταία μας εμπειρία στο εστιατόριο αυτό δεν ήταν ιδιαίτερα θετική. "
                "Αν και ο χώρος είχε μια ενδιαφέρουσα ατμόσφαιρα, το φαγητό ήταν μέτριο και η εξυπηρέτηση ήταν αργή. "
                "Οι τιμές ήταν επίσης απογοητευτικές για την ποιότητα που προσφέρθηκε."]
print("Text: ", sample_texts[0])

emotion_results, sentiment_results = predict(sample_texts)

print("\nSentiment probabilities (%):")
for label, prob in sentiment_results.items():
    print(f" {label}: {prob:.2f}%")

print("\nEmotion probabilities (%):")
for label, prob in emotion_results.items():
    print(f" {label}: {prob:.2f}%")
```
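
Note that `predict` above reports scores only for the first text in a batch (`tolist()[0]`). If you need scores for several reviews at once, a minimal adaptation could look like the sketch below. It reuses the `model` and `tokenizer` loaded above; the label lists are repeated for self-containment, and `predict_batch` is just an illustrative name, not part of the repository.

```python
emotion_labels = [
    'joy', 'trust', 'excitement', 'gratitude', 'hope', 'love', 'pride',
    'anger', 'disgust', 'fear', 'sadness', 'anxiety', 'frustration', 'guilt',
    'disappointment', 'surprise', 'anticipation', 'neutral'
]
sentiment_labels = ['negative', 'neutral', 'positive']

def predict_batch(texts):
    """Return per-text emotion and sentiment probabilities (in %)."""
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device)
    model.eval()
    inputs = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")
    inputs = {k: v.to(device) for k, v in inputs.items()}
    with torch.no_grad():
        outputs = model(**inputs)
    emotion_probs = torch.sigmoid(outputs["emotion_logits"]) * 100
    sentiment_probs = torch.softmax(outputs["sentiment_logits"], dim=1) * 100
    return [
        {
            "emotions": dict(zip(emotion_labels, e_row)),
            "sentiment": dict(zip(sentiment_labels, s_row)),
        }
        for e_row, s_row in zip(emotion_probs.tolist(), sentiment_probs.tolist())
    ]

texts = ["Η εξυπηρέτηση ήταν άψογη.", "Το φαγητό ήταν μέτριο και η εξυπηρέτηση αργή."]
for text, scores in zip(texts, predict_batch(texts)):
    # Report the most likely sentiment per text
    top = max(scores["sentiment"], key=scores["sentiment"].get)
    print(f"{text} -> {top} ({scores['sentiment'][top]:.2f}%)")
```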
Expected output:
```text
Text: Απολαύσαμε μια υπέροχη βραδιά σε αυτό το εστιατόριο. Το μενού ήταν πολύ καλά σχεδιασμένο και κάθε πιάτο ήταν μια γευστική έκπληξη. Η εξυπηρέτηση ήταν άψογη και η ατμόσφαιρα ευχάριστη. Σίγουρα θα επιστρέψουμε για άλλη μια φορά.
Sentiment probabilities (%):
negative: 17.36%
neutral: 11.31%
positive: 71.33%
Emotion probabilities (%):
joy: 99.92%
trust: 93.40%
excitement: 73.43%
gratitude: 97.52%
hope: 0.33%
love: 12.20%
pride: 1.09%
anticipation: 0.31%
======
New prediction:
Text: Η τελευταία μας εμπειρία στο εστιατόριο αυτό δεν ήταν ιδιαίτερα θετική. Αν και ο χώρος είχε μια ενδιαφέρουσα ατμόσφαιρα, το φαγητό ήταν μέτριο και η εξυπηρέτηση ήταν αργή. Οι τιμές ήταν επίσης απογοητευτικές για την ποιότητα που προσφέρθηκε.
Sentiment probabilities (%):
negative: 58.39%
neutral: 16.34%
positive: 25.27%
Emotion probabilities (%):
frustration: 68.61%
disappointment: 99.84%
neutral: 0.75%
```
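
The emotion dictionary returned by `predict` is unordered and may contain very low scores (e.g. `anticipation: 0.31%` above). If you only want the strongest emotions, a small post-processing step such as the following (assuming `emotion_results` is the dict returned by `predict`) is enough:

```python
# Keep the three strongest emotions, sorted by probability
top_emotions = sorted(emotion_results.items(), key=lambda kv: kv[1], reverse=True)[:3]
for label, prob in top_emotions:
    print(f"{label}: {prob:.2f}%")
```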
## Evaluation
Due to time constraints, no official benchmarking has been done yet.
However, evaluation on a held-out test dataset gives the following results for emotion classification:
- **F1:** 0.9448
- **Loss:** 0.0322
- **Accuracy:** 0.7857
- **Hamming loss:** 0.0141
- **Precision:** 0.9785
- **Recall:** 0.9133
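
For context, metrics of this kind for a multi-label head are typically computed by thresholding the sigmoid outputs and comparing them with the binary label matrix. The sketch below shows one common way to do this with scikit-learn; the arrays, the 0.5 threshold, and the micro averaging are illustrative assumptions, not the actual evaluation script.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, hamming_loss,
                             precision_score, recall_score)

# Hypothetical data: 4 samples x 18 emotion labels
y_true = np.array([[1, 0, 1] + [0] * 15,
                   [0, 1, 0] + [0] * 15,
                   [0, 0, 0] + [0] * 14 + [1],
                   [1, 1, 0] + [0] * 15])
probs = np.random.rand(4, 18)         # stand-in for sigmoid(emotion_logits)
y_pred = (probs >= 0.5).astype(int)   # threshold the probabilities

print("f1 (micro):  ", f1_score(y_true, y_pred, average="micro", zero_division=0))
print("precision:   ", precision_score(y_true, y_pred, average="micro", zero_division=0))
print("recall:      ", recall_score(y_true, y_pred, average="micro", zero_division=0))
print("hamming loss:", hamming_loss(y_true, y_pred))
print("accuracy:    ", accuracy_score(y_true, y_pred))  # exact-match (subset) accuracy
```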
Enjoy! |