---
library_name: transformers
tags:
- classification
- sentiment
license: mit
language:
- en
metrics:
- accuracy
base_model:
- google-bert/bert-base-uncased
pipeline_tag: text-classification
---

# Model Card for MESSItom/BERT-review-sentiment-analysis

This model is fine-tuned from BERT to perform sentiment analysis on a custom dataset of student reviews about campus events and amenities. The objective is to classify each review as positive, negative, or neutral while maintaining high accuracy.

## Model Details

### Model Description

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** Messy Tom Binoy
- **Funded by:** Self-funded (no external funding)
- **Shared by:** Messy Tom Binoy
- **Model type:** BERT
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** google-bert/bert-base-uncased

### Model Sources

- **Repository:** [GitHub Repository](https://github.com/messi10tom/Fine-Tuning-BERT-for-Sentiment-Analysis/tree/main)
- **Demo:** [GitHub Demo](https://github.com/messi10tom/Fine-Tuning-BERT-for-Sentiment-Analysis/tree/main)

## Uses

### Direct Use

The model can be used directly for sentiment classification of student reviews about campus events or amenities.
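For quick experimentation, here is a minimal sketch of direct use via the 🤗 `pipeline` API. The example review text is illustrative, and the label strings returned depend on the `id2label` mapping stored in the model config:

```python
from transformers import pipeline

# Load the fine-tuned model through the text-classification pipeline
classifier = pipeline("text-classification", model="MESSItom/BERT-review-sentiment-analysis")

# Illustrative review; returns a list of dicts with "label" and "score",
# where the label names come from the model's id2label config
print(classifier("The new library hours are a huge improvement."))
```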

### Downstream Use

The model can be fine-tuned further for other sentiment analysis tasks or integrated into larger applications for sentiment classification.
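As a rough illustration of further fine-tuning with the 🤗 `Trainer` API, the sketch below assumes placeholder CSV files with a `text` column and an integer `label` column (0–2); file names and column names are not part of this repository:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_id = "MESSItom/BERT-review-sentiment-analysis"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=3)

# Placeholder CSV files with a "text" column and an integer "label" column (0-2)
dataset = load_dataset("csv", data_files={"train": "train.csv", "validation": "val.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], padding=True, truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="bert-review-sentiment-finetuned",
    num_train_epochs=3,
    per_device_train_batch_size=16,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
)
trainer.train()
```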

### Out-of-Scope Use

The model is not suitable for tasks outside sentiment analysis, such as language translation or text generation.

## Bias, Risks, and Limitations

The model may inherit biases from the pre-trained BERT model and the custom dataset used for fine-tuning. It may not perform well on reviews that are significantly different from the training data.

### Recommendations

Users should be aware of the potential biases and limitations of the model. It is recommended to evaluate the model on a diverse set of reviews to understand its performance and limitations.
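One simple way to do this is to score the model on a small labeled sample of held-out reviews and compute accuracy. The reviews and reference labels below are illustrative placeholders, and the label comparison assumes the model's `id2label` config returns lowercase sentiment names; adjust it if the model returns `LABEL_0`-style names:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="MESSItom/BERT-review-sentiment-analysis")

# Hypothetical held-out reviews with reference labels; replace with a real, diverse evaluation set
eval_set = [
    ("The gym equipment is always broken.", "negative"),
    ("The career fair was well organized.", "positive"),
    ("The cafeteria menu is the same as last semester.", "neutral"),
]

# Count predictions that match the reference label
correct = sum(classifier(text)[0]["label"].lower() == label for text, label in eval_set)
print(f"Accuracy on sample: {correct / len(eval_set):.2f}")
```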

## How to Get Started with the Model

Use the code below to get started with the model:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "MESSItom/BERT-review-sentiment-analysis"

model = AutoModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

def predict_sentiment(text):
    # Tokenize the review and run a forward pass without tracking gradients
    inputs = tokenizer(text, return_tensors='pt', padding=True, truncation=True, max_length=512)
    with torch.no_grad():
        outputs = model(**inputs)
    logits = outputs.logits
    predicted_class = torch.argmax(logits, dim=-1).item()
    # Map the predicted class index to a human-readable label
    class_names = ['positive', 'neutral', 'negative']
    sentiment = class_names[predicted_class]
    return sentiment
```
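
The helper can then be called on individual reviews; the example text below is illustrative:

```python
# Illustrative call; any student-review string works here
print(predict_sentiment("The Wi-Fi in the dorms has been much faster this semester."))
# -> 'positive', 'neutral', or 'negative'
```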