---
license: apache-2.0
datasets:
- stanfordnlp/imdb
- bharatiyabytes/sentimentWithSarcasm
language:
- en
base_model:
- google/flan-t5-small
pipeline_tag: text-classification
tags:
- sarcasm
- gen-ai
- llms
---
# Model Card for Sarcasm-Enhanced Sentiment Analysis Model

This model performs sentiment analysis with a specific focus on detecting sarcasm in textual content. It is fine-tuned on a combination of standard sentiment datasets and specialized sarcastic data, allowing for more nuanced sentiment classification that accounts for sarcastic language.

## Model Details

### Model Description

This model leverages the **Flan-T5-small** transformer architecture, fine-tuned on datasets including **IMDB** for sentiment analysis and **bharatiyabytes/sentimentWithSarcasm** for sarcasm detection. By combining these datasets, the model is better equipped to differentiate between sarcastic and genuine sentiment expressions, improving sentiment analysis accuracy in contexts where sarcasm is prevalent.

- **Developed by:** bharatiyabytes
- **Funded by [optional]:** Not yet funded
- **Model type:** Text Classification
- **Language(s) (NLP):** English (en)
- **License:** Apache 2.0
- **Fine-tuned from model:** google/flan-t5-small

### Model Sources [optional]

- **Repository:** https://github.com/sohi-g/lets-talk-the-hype.git
- **Paper [optional]:** [Link to paper if available]
- **Demo [optional]:** [Link to demo if available]

## Uses

### Direct Use

This model can be used directly for sentiment classification on texts where sarcasm may obscure the intended sentiment. By accounting for nuanced expressions, it is well suited to social media analysis, customer feedback processing, and other sarcasm-rich content sources.

### Downstream Use 

This model can serve as a base for further fine-tuning for domains that require sarcasm-aware sentiment analysis, such as customer service, public relations, and social media monitoring applications.

### Out-of-Scope Use

The model may not perform well on texts with heavy dialect or informal language not represented in the fine-tuning data. It is not intended for multilingual sarcasm detection.

## Bias, Risks, and Limitations

As with many sentiment analysis models, there may be inherent biases in the sarcasm and sentiment labels in the training datasets, potentially affecting model performance across different demographic or cultural groups. Users should be cautious when using this model in critical decision-making contexts.

### Recommendations

Users should perform additional validation on specific datasets to ensure that model predictions align with intended use cases, especially in high-stakes applications.

## How to Get Started with the Model

Use the code below to get started with the model:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("your-username/sarcasm-sentiment-analysis")
model = AutoModelForSequenceClassification.from_pretrained("your-username/sarcasm-sentiment-analysis")

inputs = tokenizer("Your input text here", return_tensors="pt")
with torch.no_grad():  # inference only, so no gradients are needed
    outputs = model(**inputs)
predictions = torch.argmax(outputs.logits, dim=-1)
# Label names (e.g. positive/negative) depend on the checkpoint's config.
print(model.config.id2label[predictions.item()])
```

## Training Details

### Training Data

The model was fine-tuned on a combination of IMDB (a general sentiment analysis dataset) and bharatiyabytes/sentimentWithSarcasm (designed to capture sarcastic sentiment). This blend improves the model’s ability to identify nuanced sentiment.

### Training Procedure

#### Preprocessing

Standard text preprocessing was applied, including tokenization and lowercasing.
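A sketch of this preprocessing with the base model's tokenizer. Note that T5 tokenizers preserve case by default, so the lowercasing shown here is applied manually; the exact truncation length and other settings are assumptions, not taken from the training script.

```python
from transformers import AutoTokenizer

# Load the base tokenizer; the fine-tuned checkpoint's tokenizer should match.
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")

text = "Oh great, ANOTHER Monday. Just what I needed."
# Lowercase before tokenizing (an assumed step; T5 tokenizers keep case otherwise).
encoded = tokenizer(
    text.lower(),
    truncation=True,
    max_length=512,  # assumed maximum sequence length
    return_tensors="pt",
)
print(encoded["input_ids"].shape)
```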


#### Training Hyperparameters

- **Training regime:** FP32
- **Epochs:** 3
- **Batch size:** 16
- **Learning rate:** 3e-5
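The hyperparameters above imply a training loop along these lines. This is a hypothetical sketch: a tiny stand-in model replaces the actual Flan-T5 checkpoint and random tensors replace the tokenized batches, so the loop runs anywhere; only the optimizer, learning rate, batch size, and epoch count come from the card.

```python
import torch
from torch.optim import AdamW

# Stand-in for the fine-tuned classifier (the real model is Flan-T5-small).
model = torch.nn.Linear(8, 2)
optimizer = AdamW(model.parameters(), lr=3e-5)  # learning rate from the card
loss_fn = torch.nn.CrossEntropyLoss()

epochs, batch_size = 3, 16  # epoch count and batch size from the card
for epoch in range(epochs):
    inputs = torch.randn(batch_size, 8)          # stand-in for a tokenized batch
    labels = torch.randint(0, 2, (batch_size,))  # binary sentiment labels
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)        # plain FP32 forward pass
    loss.backward()
    optimizer.step()
print(f"final loss: {loss.item():.4f}")
```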

#### Speeds, Sizes, Times

Training took approximately 4 hours on an NVIDIA V100 GPU with a model size of 60M parameters.

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

The model was evaluated on the IMDB and bharatiyabytes/sentimentWithSarcasm test splits.

#### Factors

Evaluation results are disaggregated across sarcastic and non-sarcastic samples.

#### Metrics

- **Accuracy:** Measures overall sentiment classification accuracy.
- **F1 score (sarcasm):** Evaluates the model's sarcasm detection capability, which is key for accurate sarcastic sentiment handling.
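Both metrics can be computed with scikit-learn; the labels below are made-up placeholders to illustrate the calculation, not the model's actual predictions.

```python
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical gold labels and predictions (1 = sarcastic class).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]

acc = accuracy_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred)  # F1 on the positive (sarcastic) class
print(f"accuracy={acc:.2f}, sarcasm F1={f1:.2f}")
```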

### Results

The model achieved an accuracy of 88% on the sentiment classification task and an F1 score of 0.83 on sarcasm detection.

#### Summary

The model shows strong performance in sarcasm-sensitive sentiment analysis, making it suitable for applications where nuanced sentiment interpretation is crucial.

## Model Examination

The model’s predictions have been examined to ensure that sarcastic content is accurately labeled, using interpretability tools such as SHAP to visualize model attention on sarcastic phrases.

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:**  NVIDIA V100 GPU
- **Hours used:** Approximately 4 hours
- **Cloud Provider:** Google
- **Compute Region:** USA
- **Carbon Emitted:** NA

## Technical Specifications

### Model Architecture and Objective

The model uses the Flan-T5-small architecture fine-tuned for binary sentiment classification with sarcasm detection as an enhancement.

### Compute Infrastructure

[More Information Needed]

#### Hardware

NVIDIA V100 GPU

#### Software

Hugging Face Transformers Library, PyTorch

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors 
- **Shivang Sinha**
- **Garima Sohi** 
- **Parteek** 

## Model Card Contact

[email protected]