Update README.md

README.md (CHANGED)
---
base_model:
- google-bert/bert-base-uncased
library_name: transformers
---

# Model Card: BERT Fine-Tuned for Frugal AI Challenge - Text Task

## Model Overview

This model is a fine-tuned version of the `bert-base-uncased` transformer, tailored for multi-class text classification. It was developed as part of the Frugal AI Challenge to classify text into eight distinct categories. The model adds a custom classification head and uses class weighting to address dataset imbalance.

## Dataset

- **Source**: Frugal AI Challenge Text Task Dataset
- **Classes**: 8 unique labels representing various categories of text
- **Preprocessing**: Tokenization with `BertTokenizer`, padding and truncating to a maximum sequence length of 128 (a sketch of this step follows the list).

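A minimal sketch of this preprocessing step using the standard `transformers` tokenizer API; the example texts are illustrative:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# Illustrative batch; in training this would be the dataset's text column
texts = ["Example input one.", "A second, somewhat longer example input."]
encoded = tokenizer(
    texts,
    padding="max_length",  # pad every sequence to the fixed length
    truncation=True,       # cut anything beyond 128 tokens
    max_length=128,
    return_tensors="pt",
)
print(encoded["input_ids"].shape)  # torch.Size([2, 128])
```
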
## Model Architecture

- **Base Model**: `bert-base-uncased`
- **Classification Head**: Custom head trained with a weighted cross-entropy loss to handle class imbalance (a loss sketch follows the list).
- **Number of Labels**: 8

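The training script is not published with this card, so the following is only a sketch of one common way to apply a weighted cross-entropy loss with `transformers`: subclass `Trainer` and override `compute_loss`. The class name and the `class_weights` argument are illustrative.

```python
import torch
from transformers import Trainer

class WeightedLossTrainer(Trainer):
    """Trainer variant that applies per-class weights in the loss (illustrative)."""

    def __init__(self, class_weights, **kwargs):
        super().__init__(**kwargs)
        self.class_weights = class_weights  # float tensor of shape (num_labels,)

    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        # Weighted cross-entropy: rarer classes contribute more to the loss
        loss_fct = torch.nn.CrossEntropyLoss(weight=self.class_weights.to(logits.device))
        loss = loss_fct(logits.view(-1, model.config.num_labels), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
```
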

## Training Details

- **Optimizer**: AdamW
- **Learning Rate**: 2e-5
- **Batch Size**: 16 (for both training and evaluation)
- **Epochs**: 3
- **Weight Decay**: 0.01
- **Evaluation Strategy**: performed at the end of each epoch
- **Hardware**: trained on GPU (the settings above are collected in the sketch below)
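
These hyperparameters map directly onto `TrainingArguments` from `transformers`; a minimal sketch, with an illustrative output directory:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-frugal-ai-text",  # illustrative path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
    eval_strategy="epoch",  # named evaluation_strategy in transformers < 4.41
)
```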

## Performance Metrics (Validation Set)

The model achieved the following metrics on the validation set:

| Class                           | Precision | Recall | F1-Score | Support |
|---------------------------------|-----------|--------|----------|---------|
| 0_not_relevant                  | 0.82      | 0.60   | 0.70     | 324     |
| 1_not_happening                 | 0.69      | 0.80   | 0.74     | 148     |
| 2_not_human                     | 0.68      | 0.70   | 0.69     | 141     |
| 3_not_bad                       | 0.59      | 0.62   | 0.61     | 77      |
| 4_solutions_harmful_unnecessary | 0.62      | 0.72   | 0.66     | 155     |
| 5_science_unreliable            | 0.65      | 0.71   | 0.68     | 160     |
| 6_proponents_biased             | 0.62      | 0.64   | 0.63     | 157     |
| 7_fossil_fuels_needed           | 0.59      | 0.68   | 0.63     | 57      |

- **Overall Accuracy**: 68%
- **Macro Average**: Precision 0.66, Recall 0.68, F1-Score 0.67
- **Weighted Average**: Precision 0.69, Recall 0.68, F1-Score 0.68

## Training Evolution

### Training and Validation Loss

The evolution of training and validation loss over epochs is shown below:

![Training and Validation Loss](training_validation_loss.png)

### Validation Accuracy

The evolution of validation accuracy over epochs is shown below:



## Confusion Matrix

The confusion matrix below illustrates the model's performance on the validation set, highlighting areas of strength and potential misclassifications:


## Key Features

- **Class Weighting**: Addressed dataset imbalance by incorporating class weights during training (a weight-computation sketch follows the list).
- **Custom Loss Function**: Weighted cross-entropy loss for better handling of underrepresented classes.
- **Evaluation Metrics**: Accuracy, precision, recall, and F1-score were computed for a comprehensive view of the model's performance.

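How the class weights were derived is not documented here; a common choice, sketched below, is scikit-learn's balanced weighting. The label array is illustrative.

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Integer label per training example; replace with the real training labels
train_labels = np.array([0, 0, 0, 1, 2, 3, 1, 0, 4, 5, 6, 7])  # illustrative
class_weights = compute_class_weight(
    class_weight="balanced",           # inversely proportional to frequency
    classes=np.unique(train_labels),
    y=train_labels,
)
print(class_weights)  # larger weights for rarer classes
```

A tensor built from these weights (`torch.tensor(class_weights, dtype=torch.float)`) is what the weighted-loss sketch above would receive.
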
## Usage
This model can be used for multi-class text classification where input text must be assigned to one of the eight predefined classes. Because it was trained with a weighted loss, it is comparatively robust on datasets with class imbalance.

### Example Usage

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained("ParisNeo/bert-frugal-ai-text-classification")
tokenizer = AutoTokenizer.from_pretrained("ParisNeo/bert-frugal-ai-text-classification")
model.eval()

# Tokenize the input text the same way as during training (max length 128)
text = "Your input text here"
inputs = tokenizer(text, return_tensors="pt", padding="max_length", truncation=True, max_length=128)

# Perform inference without tracking gradients
with torch.no_grad():
    outputs = model(**inputs)
predicted_class = outputs.logits.argmax(-1).item()

print(f"Predicted Class: {predicted_class}")
```
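
Note that the snippet prints a numeric class index. If the label names were stored in the model config when the checkpoint was saved, they can be recovered with `model.config.id2label[predicted_class]`; whether that mapping is populated for this checkpoint is an assumption worth verifying.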

## Limitations

- **Dataset-Specific**: The model's performance is optimized for the Frugal AI Challenge dataset and may require further fine-tuning for other datasets.
- **Class Imbalance**: While class weighting mitigates imbalance, some underrepresented classes may still have lower performance.
- **Sequence Length**: Input text is truncated to a maximum length of 128 tokens, which may result in loss of information for longer texts.

## Citation

If you use this model in your research or applications, please cite it as:

```bibtex
@misc{ParisNeo_bert_frugal_ai_text_classification,
  author    = {ParisNeo},
  title     = {BERT Fine-Tuned for Frugal AI Challenge - Text Task},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/ParisNeo/bert-frugal-ai-text-classification}
}
```

## Acknowledgments

Special thanks to the Frugal AI Challenge organizers for providing the dataset and fostering innovation in AI research.