---
license: mit
datasets:
  - dair-ai/emotion
language:
  - en
library_name: transformers
widget:
  - text: I am so happy with the results!
  - text: I am so pissed with the results!
tags:
  - deberta
  - deberta-xlarge
  - emotions-classifier
---

# 🌟 Emotion-X: Fine-tuned DeBERTa-XLarge Based Emotion Detection 🌟

This is a fine-tuned version of [microsoft/deberta-xlarge-mnli](https://huggingface.co/microsoft/deberta-xlarge-mnli) for emotion detection on the [dair-ai/emotion](https://huggingface.co/dair-ai/emotion) dataset.

## 🚀 Overview

Emotion-X is an emotion detection model fine-tuned from Microsoft's DeBERTa-XLarge MNLI model. It classifies text into one of six emotional categories, leveraging DeBERTa's strong pretrained representations and fine-tuning them on a dedicated emotion dataset.

## 📜 Model Details

- **🆕 Model Name:** `AnkitAI/deberta-xlarge-base-emotions-classifier`
- **🔗 Base Model:** `microsoft/deberta-xlarge-mnli`
- **📊 Dataset:** [dair-ai/emotion](https://huggingface.co/dair-ai/emotion)
- **⚙️ Fine-tuning:** Fine-tuned for emotion detection with a six-way classification head covering the dataset's labels (sadness, joy, love, anger, fear, surprise).
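The six-way head emits one logit per emotion, and decoding is just a softmax plus argmax over those logits. A minimal, pure-Python sketch of that step (the id-to-label ordering shown follows the dair-ai/emotion dataset and is an assumption about this checkpoint — in practice you would read `model.config.id2label`):

```python
import math

# Label order of the dair-ai/emotion dataset (assumed to match this
# checkpoint's head; check model.config.id2label to be sure).
ID2LABEL = {0: "sadness", 1: "joy", 2: "love", 3: "anger", 4: "fear", 5: "surprise"}

def decode_logits(logits):
    """Turn one row of raw logits into (label, probability)."""
    # Numerically stable softmax: subtract the max logit first.
    peak = max(logits)
    exps = [math.exp(x - peak) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return ID2LABEL[best], probs[best]

# A logit vector peaking at index 1 decodes to "joy".
label, prob = decode_logits([-1.2, 4.0, -0.3, 0.1, -2.0, 0.5])
```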
## 🏋️ Training

The model was trained with the following hyperparameters:

- **🔧 Learning Rate:** 2e-5
- **📦 Batch Size:** 4
- **⏳ Epochs:** 3
- **⚖️ Weight Decay:** 0.01
- **📅 Evaluation Strategy:** epoch

### 🏋️ Training Details

- **📉 Eval Loss:** 0.0858
- **⏱️ Eval Runtime:** 110070.6349 seconds
- **📈 Eval Samples/Second:** 78.495
- **🌀 Eval Steps/Second:** 2.453
- **🔄 Epoch:** 3.0
- **📉 Train Loss:** 0.1049
- **⏳ Eval Accuracy:** 94.6%
- **🌀 Eval Precision:** 94.8%
- **⏱️ Eval Recall:** 94.5%
- **📈 Eval F1 Score:** 94.7%

## 🚀 Usage

You can use this model directly with the Hugging Face `transformers` library:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "AnkitAI/deberta-xlarge-base-emotions-classifier"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

def predict_emotion(text):
    """Return the name of the most likely emotion for `text`."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=128)
    with torch.no_grad():
        logits = model(**inputs).logits
    predicted_id = logits.argmax(dim=-1).item()
    return model.config.id2label[predicted_id]

text = "I'm so happy with the results!"
print("Detected Emotion:", predict_emotion(text))
```

## 📝 Emotion Labels

- 😢 Sadness
- 😊 Joy
- ❤️ Love
- 😠 Anger
- 😨 Fear
- 😲 Surprise

## 📜 Model Card Data

| Parameter                   | Value                         |
|-----------------------------|-------------------------------|
| Base Model                  | microsoft/deberta-xlarge-mnli |
| Training Dataset            | dair-ai/emotion               |
| Number of Training Epochs   | 3                             |
| Learning Rate               | 2e-5                          |
| Per Device Train Batch Size | 4                             |
| Evaluation Strategy         | epoch                         |
| Best Model Accuracy         | 94.6%                         |

## 📜 License

This model is licensed under the [MIT License](LICENSE).