
Fine-tuned XLM-R Model for Sundanese Sentiment Analysis

This is a fine-tuned XLM-R model for sentiment analysis in Sundanese.

Model Details

  • Model Name: XLM-R Sentiment Analysis
  • Language: Sundanese
  • Model Size: 278M parameters (F32, safetensors)
  • Fine-tuning Dataset: DGurgurov/sundanese_sa

Training Details

  • Epochs: 20
  • Batch Size: 32 (train), 64 (eval)
  • Optimizer: AdamW
  • Learning Rate: 5e-5
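
These hyperparameters map directly onto a standard Hugging Face Trainer run. The following is a minimal, non-authoritative sketch of such a setup; the base checkpoint (xlm-roberta-base), the number of labels, and the dataset column and split names are assumptions rather than details confirmed by this card:

from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    TrainingArguments,
    Trainer,
)
from datasets import load_dataset

# Fine-tuning dataset listed above
dataset = load_dataset("DGurgurov/sundanese_sa")
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

def tokenize(batch):
    # The "text" column name is an assumption about the dataset schema
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

tokenized = dataset.map(tokenize, batched=True)

# num_labels=2 assumes a binary positive/negative sentiment task
model = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=2)

args = TrainingArguments(
    output_dir="xlm-r_sundanese_sentiment",
    num_train_epochs=20,              # epochs from the card
    per_device_train_batch_size=32,   # train batch size
    per_device_eval_batch_size=64,    # eval batch size
    learning_rate=5e-5,               # AdamW is the Trainer's default optimizer
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],   # split name is an assumption
)
trainer.train()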

Performance Metrics

  • Accuracy: 0.88158
  • Macro F1: 0.88106
  • Micro F1: 0.88158
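
Micro F1 matches Accuracy here because, for single-label classification, micro-averaged F1 reduces to accuracy. A small illustration with scikit-learn (the evaluation tooling is assumed; the label lists are placeholders):

from sklearn.metrics import accuracy_score, f1_score

# Placeholder predictions for illustration only
y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0]

accuracy = accuracy_score(y_true, y_pred)
macro_f1 = f1_score(y_true, y_pred, average="macro")  # unweighted mean of per-class F1
micro_f1 = f1_score(y_true, y_pred, average="micro")  # equals accuracy for single-label tasks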

Usage

To use this model, you can load it with the Hugging Face Transformers library:

from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("DGurgurov/xlm-r_sundanese_sentiment")
model = AutoModelForSequenceClassification.from_pretrained("DGurgurov/xlm-r_sundanese_sentiment")
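
Once loaded, the model can be run on a Sundanese sentence as sketched below. The example sentence is illustrative, and the id-to-label mapping (e.g., 0 = negative, 1 = positive) is an assumption unless it is stored in the model's config.id2label:

import torch

text = "Pilem ieu alus pisan!"  # illustrative Sundanese input
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])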

License

MIT
