electrical-classification-ModernBERT-base

Model description

This model is fine-tuned from answerdotai/ModernBERT-base for text classification, specifically sentiment analysis of customer feedback on electrical devices such as circuit breakers, transformers, smart meters, inverters, solar panels, and power strips. It classifies feedback into four sentiment categories (Positive, Negative, Neutral, and Mixed) with high precision and recall, making it well suited for analyzing product reviews, customer surveys, and other feedback to derive actionable insights.

Training Data

The model was trained on the disham993/ElectricalDeviceFeedbackBalanced dataset, which was carefully balanced to address the class imbalance in the original disham993/ElectricalDeviceFeedback dataset.
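
To inspect the data yourself, the sketch below loads the dataset and prints its class distribution; the "train" split and "label" column names are assumptions, not confirmed by this card.

from collections import Counter
from datasets import load_dataset

# Load the balanced dataset from the Hugging Face Hub
dataset = load_dataset("disham993/ElectricalDeviceFeedbackBalanced")

# Print the class distribution; the "label" column name is assumed
print(Counter(dataset["train"]["label"]))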

Model Details

Training procedure

Training hyperparameters

The model was fine-tuned using the following hyperparameters:

  • Evaluation Strategy: epoch
  • Learning Rate: 1e-5
  • Batch Size: 64 (for both training and evaluation)
  • Number of Epochs: 5
  • Weight Decay: 0.01
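
For reference, a minimal sketch of how these hyperparameters would map onto transformers TrainingArguments is shown below; the output directory is a placeholder, and older transformers releases spell the first option evaluation_strategy.

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="modernbert-electrical-classification",  # placeholder path
    eval_strategy="epoch",           # evaluate at the end of each epoch
    learning_rate=1e-5,
    per_device_train_batch_size=64,  # same batch size for training...
    per_device_eval_batch_size=64,   # ...and evaluation
    num_train_epochs=5,
    weight_decay=0.01,
)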

Evaluation results

The following metrics were achieved during evaluation:

  • F1 Score: 0.8899
  • Accuracy: 0.8875
  • Evaluation Runtime: 1.2105 s
  • Samples per Second: 1116.881
  • Steps per Second: 18.174
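
The card does not state how the F1 score was averaged; a typical compute_metrics function for the transformers Trainer, assuming weighted F1, is sketched below.

import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    # Trainer passes (logits, labels); take the argmax to get class ids
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, predictions),
        # Weighted averaging is an assumption, not stated in the card
        "f1": f1_score(labels, predictions, average="weighted"),
    }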

Usage

You can use this model for sentiment analysis of electrical device feedback as follows:

from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
model_name = "disham993/electrical-classification-ModernBERT-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Wrap both in a text-classification pipeline for one-line inference
nlp = pipeline("text-classification", model=model, tokenizer=tokenizer)

text = "The new washing machine is efficient but produces a bit of noise."
classification_results = nlp(text)
print(classification_results)  # a list of {'label': ..., 'score': ...} dicts
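
The pipeline also accepts a list of texts, and top_k=None (return_all_scores=True in older transformers versions) returns scores for every class instead of only the top one; this is standard pipeline behavior, and the example reviews below are illustrative only.

# Classify several reviews at once, returning scores for all classes
texts = [
    "The circuit breaker trips instantly and support was unhelpful.",
    "Installation of the smart meter was quick and painless.",
]
for review, scores in zip(texts, nlp(texts, top_k=None)):
    print(review, scores)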

Limitations and bias

The dataset includes synthetic data generated using Llama 3.1:8b, and despite careful optimization and prompt engineering, the labels are not immune to errors. Additionally, since LLM technology is still maturing, the generated data may contain inherent inaccuracies or biases that can affect the model's performance.

This model is intended for research and educational purposes only, and users are encouraged to validate results before applying them to critical applications.

Training Infrastructure

For a complete guide covering the entire process - from data tokenization to pushing the model to the Hugging Face Hub - please refer to the GitHub repository.
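
As a rough sketch of the final step only, publishing a trained model and tokenizer to the Hub typically looks like the following; the local checkpoint path is hypothetical, and prior authentication (e.g. via huggingface-cli login) is assumed.

from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical local path produced by training
checkpoint = "path/to/final-checkpoint"
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Push both artifacts to the model repository on the Hub
model.push_to_hub("disham993/electrical-classification-ModernBERT-base")
tokenizer.push_to_hub("disham993/electrical-classification-ModernBERT-base")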

Last update

2025-01-05
