Shivangsinha committed
Commit f925544 (parent d478561)
Update README.md

README.md: CHANGED
**Previous content (removed):** the README was the default Hugging Face model card template ("# Model Card for Model"), generated from [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1), with every section (Model Details, Uses, Bias, Risks, and Limitations, How to Get Started, Training Details, Evaluation, Environmental Impact, Technical Specifications, Citation, Model Card Authors, Model Card Contact) left as unfilled placeholders ("[More Information Needed]"). Its frontmatter listed license apache-2.0, the dataset stanfordnlp/imdb, language en, a base_model field, and the tags gen-ai and llms.

**Updated content:** the frontmatter now also lists bharatiyabytes/sentimentWithSarcasm under datasets (alongside stanfordnlp/imdb, language en, base_model, and the tags gen-ai and llms), and the card body reads as follows.

# Model Card for Sarcasm-Enhanced Sentiment Analysis Model

This model performs sentiment analysis with a specific focus on detecting sarcasm in textual content. It is fine-tuned on a combination of standard sentiment datasets and specialized sarcasm data, allowing for more nuanced sentiment classification that accounts for sarcastic language.

## Model Details

### Model Description

This model leverages the **Flan-T5-small** transformer architecture, fine-tuned on datasets including **IMDB** for sentiment analysis and **bharatiyabytes/sentimentWithSarcasm** for sarcasm detection. By combining these datasets, the model is better equipped to differentiate between sarcastic and genuine sentiment expressions, improving sentiment analysis accuracy in contexts where sarcasm is prevalent.

- **Developed by:** [Your Name/Organization]
- **Funded by [optional]:** [Your Funder, if applicable]
- **Shared by [optional]:** [Your Organization, if applicable]
- **Model type:** Text Classification
- **Language(s) (NLP):** English (en)
- **License:** Apache 2.0
- **Fine-tuned from model:** google/flan-t5-small

### Model Sources [optional]

- **Repository:** [Repository link on Hugging Face]
- **Paper [optional]:** [Link to paper if available]
- **Demo [optional]:** [Link to demo if available]

## Uses

### Direct Use

This model can be used directly for sentiment classification on texts where sarcasm may obscure the intended sentiment. Because it accounts for such nuanced expressions, it is well suited to social media analysis, customer feedback processing, and other sarcasm-rich content sources.
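
For quick, direct use, the model can also be called through a `transformers` pipeline. The sketch below assumes the placeholder repository id used later in this card; replace it with the actual model repository.

```python
from transformers import pipeline

# Placeholder repository id; swap in the real model repo before running
classifier = pipeline(
    "text-classification",
    model="your-username/sarcasm-sentiment-analysis",
)

# A sarcastic example: the surface wording is positive, the intended sentiment is not
print(classifier("Oh, fantastic. The printer is out of ink again."))
```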

### Downstream Use [optional]

This model can serve as a base for further fine-tuning in domains that require sarcasm-aware sentiment analysis, such as customer service, public relations, and social media monitoring applications.

### Out-of-Scope Use

The model may not perform well on text with heavy dialect or informal language beyond the sarcastic styles represented in the fine-tuning data, and it is not intended for multilingual sarcasm detection.

## Bias, Risks, and Limitations

As with many sentiment analysis models, there may be inherent biases in the sarcasm and sentiment labels of the training datasets, potentially affecting model performance across different demographic or cultural groups. Users should be cautious when using this model in critical decision-making contexts.

### Recommendations

Users should perform additional validation on their own datasets to ensure that model predictions align with the intended use case, especially in high-stakes applications.

## How to Get Started with the Model

Use the code below to get started with the model:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load the tokenizer and the fine-tuned classifier (placeholder repository id)
tokenizer = AutoTokenizer.from_pretrained("your-username/sarcasm-sentiment-analysis")
model = AutoModelForSequenceClassification.from_pretrained("your-username/sarcasm-sentiment-analysis")

# Tokenize the input and run a forward pass without tracking gradients
inputs = tokenizer("Your input text here", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Index of the highest-scoring sentiment class
predictions = torch.argmax(outputs.logits, dim=-1)
```
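
The returned index can be mapped back to a human-readable label through `model.config.id2label`, assuming the fine-tuned checkpoint stores its label names in the config.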

## Training Details

### Training Data

The model was fine-tuned on a combination of IMDB (a general sentiment analysis dataset) and bharatiyabytes/sentimentWithSarcasm (designed to capture sarcastic sentiment). This blend improves the model's ability to identify nuanced sentiment.
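
As an illustration, the two corpora could be loaded and merged with the `datasets` library roughly as follows; the shared column names are an assumption, not something documented in this card.

```python
from datasets import load_dataset, concatenate_datasets

# Load the two training corpora from the Hugging Face Hub
imdb = load_dataset("stanfordnlp/imdb", split="train")
sarcasm = load_dataset("bharatiyabytes/sentimentWithSarcasm", split="train")

# Keep only the columns the two datasets are assumed to share ("text", "label"),
# then concatenate and shuffle into a single training set.
shared = ["text", "label"]
imdb = imdb.remove_columns([c for c in imdb.column_names if c not in shared])
sarcasm = sarcasm.remove_columns([c for c in sarcasm.column_names if c not in shared])

train_data = concatenate_datasets([imdb, sarcasm]).shuffle(seed=42)
```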

### Training Procedure

#### Preprocessing [optional]

Standard text preprocessing methods were applied, such as tokenization and lowercasing.
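
A minimal sketch of that preprocessing, applied to the combined dataset built in the sketch above; the `text` column name, padding strategy, and maximum sequence length are assumptions.

```python
from transformers import AutoTokenizer

# google/flan-t5-small is the base checkpoint named elsewhere in this card
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")

def preprocess(batch):
    # Lowercase each example, then tokenize to a fixed length
    lowered = [text.lower() for text in batch["text"]]
    return tokenizer(lowered, truncation=True, padding="max_length", max_length=256)

# `train_data` is the concatenated dataset from the Training Data sketch
train_data = train_data.map(preprocess, batched=True)
```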

#### Training Hyperparameters

- Training regime: FP32
- Epochs: 3
- Batch size: 16
- Learning rate: 3e-5
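
Expressed with the `transformers` Trainer API, these settings might look like the sketch below; the output directory and the training dataset object are assumptions, not details taken from the actual run.

```python
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

# Binary classification head on top of the base checkpoint named in this card
model = AutoModelForSequenceClassification.from_pretrained("google/flan-t5-small", num_labels=2)

# Hyperparameters listed above: 3 epochs, batch size 16, learning rate 3e-5, FP32
training_args = TrainingArguments(
    output_dir="sarcasm-sentiment-analysis",  # assumed output path
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=3e-5,
    fp16=False,  # FP32 training regime
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_data,  # tokenized dataset from the preprocessing sketch
)
trainer.train()
```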

#### Speeds, Sizes, Times [optional]

Training took approximately 4 hours on an NVIDIA V100 GPU; the model has roughly 60M parameters.

## Evaluation

#### Testing Data

The model was evaluated on the IMDB and bharatiyabytes/sentimentWithSarcasm test splits.

#### Factors

Evaluation results are disaggregated by sarcastic vs. non-sarcastic samples.

#### Metrics

- Accuracy: Measures overall sentiment classification accuracy.
- F1 Score (Sarcasm): Evaluates the model's sarcasm detection capability, which is key for accurate sarcastic sentiment handling.
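
For reference, these metrics can be computed with scikit-learn; the label arrays below are placeholders purely for illustration.

```python
from sklearn.metrics import accuracy_score, f1_score

# Placeholder gold labels and predictions (0 = negative / not sarcastic, 1 = positive / sarcastic)
sentiment_true, sentiment_pred = [1, 0, 1, 1, 0], [1, 0, 0, 1, 0]
sarcasm_true, sarcasm_pred = [0, 1, 1, 0, 1], [0, 1, 0, 0, 1]

print("Sentiment accuracy:", accuracy_score(sentiment_true, sentiment_pred))
print("Sarcasm F1:", f1_score(sarcasm_true, sarcasm_pred))
```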

### Results

The model achieved an accuracy of 88% on the sentiment classification task and an F1 score of 0.83 on sarcasm detection.

#### Summary

The model shows strong performance in sarcasm-sensitive sentiment analysis, making it suitable for applications where nuanced sentiment interpretation is crucial.

## Model Examination [optional]

The model's predictions have been examined to ensure that sarcastic content is accurately labeled, using interpretability tools such as SHAP to visualize model attention on sarcastic phrases.
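
A rough sketch of that kind of inspection, following SHAP's documented usage with `transformers` pipelines; the repository id is the same placeholder as above, and the exact invocation may vary with the SHAP version installed.

```python
import shap
from transformers import pipeline

# Placeholder repo id; top_k=None makes the pipeline return scores for every class
classifier = pipeline(
    "text-classification",
    model="your-username/sarcasm-sentiment-analysis",
    top_k=None,
)

explainer = shap.Explainer(classifier)
shap_values = explainer(["Oh great, another Monday. Just what I needed."])

# Visualize token-level contributions to the predicted sentiment
shap.plots.text(shap_values)
```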

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** NVIDIA V100 GPU
- **Hours used:** Approximately 4 hours
- **Cloud Provider:** Google
- **Compute Region:** USA
- **Carbon Emitted:** N/A

## Technical Specifications [optional]

### Model Architecture and Objective

The model uses the Flan-T5-small architecture fine-tuned for binary sentiment classification, with sarcasm detection as an enhancement.

### Compute Infrastructure

#### Hardware

NVIDIA V100 GPU

#### Software

Hugging Face Transformers library, PyTorch

## Citation [optional]

…

[More Information Needed]

## Model Card Authors [optional]

- Shivang Sinha
- Garima Sohi
- Parteek

## Model Card Contact