---
library_name: transformers
tags:
- intent-classification
- text-classification
license: apache-2.0
language:
- en
base_model:
- google-bert/bert-base-uncased
pipeline_tag: text-classification
---

# Model Card for yeniguno/bert-uncased-intent-classification

This is a fine-tuned BERT model for intent classification that categorizes user utterances into 82 distinct intent labels. It was trained on a consolidated English dataset drawn from several public intent corpora (listed under Training Data below).


## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub.
model = AutoModelForSequenceClassification.from_pretrained("yeniguno/bert-uncased-intent-classification")
tokenizer = AutoTokenizer.from_pretrained("yeniguno/bert-uncased-intent-classification")

# Wrap them in a text-classification pipeline for single-call inference.
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)

text = "Play the song, Sam."
prediction = pipe(text)

print(prediction)
# [{'label': 'play_music', 'score': 0.9997674822807312}]
```

## Uses

This model is intended for Natural Language Understanding (NLU) tasks, specifically classifying user intents in applications such as the following (see the routing sketch after the list):

- Voice assistants
- Chatbots
- Customer support automation
- Conversational AI systems
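
In a chatbot, for example, the predicted label can be used to route a message to a handler. The sketch below is illustrative only: the `HANDLERS` mapping and the `route` helper are assumptions for demonstration, not part of the model's API.

```python
from transformers import pipeline

# Load once at startup; the pipeline wraps tokenization and inference.
intent_classifier = pipeline(
    "text-classification",
    model="yeniguno/bert-uncased-intent-classification",
)

# Hypothetical handlers; a real application would cover all 82 labels.
HANDLERS = {
    "play_music": lambda text: "Starting playback...",
}

def route(text: str) -> str:
    """Classify the message and dispatch it to the matching handler."""
    label = intent_classifier(text)[0]["label"]
    handler = HANDLERS.get(label)
    return handler(text) if handler else f"No handler for intent '{label}'"

print(route("Play the song, Sam."))  # "Starting playback..."
```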

## Bias, Risks, and Limitations

- Performance may degrade on intents that are underrepresented in the training data; consider rejecting low-confidence predictions (see the sketch below).
- The model is not optimized for languages other than English.
- Domain-specific intents not covered by the training data may require additional fine-tuning.
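
One common mitigation for the first point is to fall back to a clarification prompt or a human agent when the top score is low. A minimal sketch; the threshold value here is illustrative and should be tuned on your own validation data:

```python
from transformers import pipeline

pipe = pipeline(
    "text-classification",
    model="yeniguno/bert-uncased-intent-classification",
)

CONFIDENCE_THRESHOLD = 0.80  # illustrative; tune on held-out data

def classify_with_fallback(text: str):
    """Return the predicted intent, or None when the model is uncertain."""
    result = pipe(text)[0]
    if result["score"] < CONFIDENCE_THRESHOLD:
        return None  # defer to a clarification prompt or a human agent
    return result["label"]

print(classify_with_fallback("Play the song, Sam."))  # 'play_music'
```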


## Training Details

### Training Data

This model was trained on a combination of intent datasets from various sources:

Datasets Used:

- mteb/amazon_massive_intent
- mteb/mtop_intent
- sonos-nlu-benchmark/snips_built_in_intents
- Mozilla/smart_intent_dataset
- Bhuvaneshwari/intent_classification
- clinc/clinc_oos

Each dataset was preprocessed, and intent labels were consolidated into 82 unique classes.

Dataset Sizes:

- Train size: 138,228
- Validation size: 17,279
- Test size: 17,278
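
The exact consolidation script is not published with this card. The sketch below shows one plausible way to project such sources onto a shared schema with the `datasets` library; the `LABEL_MAP` table and the per-dataset config and column names are assumptions that should be checked against each dataset's actual schema.

```python
from datasets import load_dataset, concatenate_datasets, Dataset

# Hypothetical consolidation table mapping source-specific label names
# onto the 82 shared classes; the real mapping is not published here.
LABEL_MAP = {
    "music": "play_music",
    "audio_play": "play_music",
    # ... one entry per source label
}

def to_common_schema(ds: Dataset, text_col: str, label_col: str) -> Dataset:
    """Project a source dataset onto a shared (text, intent) schema."""
    label_feature = ds.features[label_col]

    def convert(example):
        raw = example[label_col]
        # Decode integer ClassLabel ids to their string names when needed.
        name = label_feature.int2str(raw) if hasattr(label_feature, "int2str") else raw
        return {"text": example[text_col], "intent": LABEL_MAP.get(name, name)}

    return ds.map(convert, remove_columns=ds.column_names)

# Config and column names below are assumptions; verify per dataset.
sources = [
    (load_dataset("mteb/amazon_massive_intent", "en", split="train"), "text", "label"),
    (load_dataset("clinc/clinc_oos", "plus", split="train"), "text", "intent"),
    # ... the remaining sources, each with its own columns
]
combined = concatenate_datasets([to_common_schema(d, t, l) for d, t, l in sources])
```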

### Training Procedure

The model was fine-tuned with the following hyperparameters:

- Base model: bert-base-uncased
- Learning rate: 3e-5
- Batch size: 32
- Epochs: 4
- Weight decay: 0.01
- Evaluation strategy: per epoch
- Precision: FP32 (mixed precision not used)
- Hardware: NVIDIA A100 GPU
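
A sketch of an equivalent `Trainer` setup, continuing from the consolidation sketch above. This is a reconstruction under stated assumptions, not the original training script; the validation split size here is illustrative, and argument names follow recent `transformers` versions.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "google-bert/bert-base-uncased",
    num_labels=82,  # the consolidated intent classes
)

# `combined` comes from the consolidation sketch above: encode the string
# intents as integer label ids, tokenize, and carve out a validation split.
combined = combined.class_encode_column("intent").rename_column("intent", "label")
tokenized = combined.map(lambda b: tokenizer(b["text"], truncation=True), batched=True)
splits = tokenized.train_test_split(test_size=0.1, seed=42)

args = TrainingArguments(
    output_dir="bert-uncased-intent-classification",
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=4,
    weight_decay=0.01,
    eval_strategy="epoch",  # `evaluation_strategy` on older transformers
    fp16=False,             # FP32 throughout, as noted above
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
    processing_class=tokenizer,  # `tokenizer=` on older transformers
)
trainer.train()
```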


## Evaluation

### Results

#### Training and Validation:
| Epoch | Training Loss | Validation Loss | Accuracy | F1 Score | Precision | Recall |
|-------|---------------|-----------------|----------|----------|-----------|--------|
| 1     | 0.1143        | 0.1014          | 97.38%   | 97.33%   | 97.36%    | 97.38% |
| 2     | 0.0638        | 0.0833          | 97.78%   | 97.79%   | 97.83%    | 97.78% |
| 3     | 0.0391        | 0.0946          | 97.98%   | 97.98%   | 97.99%    | 97.98% |
| 4     | 0.0122        | 0.1013          | 98.04%   | 98.04%   | 98.05%    | 98.04% |

#### Test Results:
| Metric      | Value    |
|-------------|----------|
| **Loss**    | 0.0814   |
| **Accuracy**| 98.37%   |
| **F1 Score**| 98.37%   |
| **Precision**| 98.38%  |
| **Recall**  | 98.37%   |
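
The near-identical accuracy, precision, and recall values suggest weighted averaging. Below is a sketch of a matching `compute_metrics` function; the averaging choice is an assumption, as the exact evaluation code is not published.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    """Accuracy plus weighted precision/recall/F1, as in the tables above."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }

# Pass `compute_metrics=compute_metrics` to the Trainer above; after
# training, `trainer.evaluate(test_split)` reports these on held-out data.
```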