# DistilBERT Fine-Tuned for Sequence Classification

## Model Overview
This is a fine-tuned version of the DistilBERT model for sequence classification. It was trained on posts from the r/AmItheAsshole subreddit to assess and classify user-submitted stories.

- **Base Model**: [DistilBERT](https://huggingface.co/distilbert-base-uncased)
- **Fine-Tuned For**: Sequence classification (e.g., sentiment analysis, AITA-type categorization)
- **Dataset**: [MattBoraske/Reddit-AITA-2018-to-2022](https://huggingface.co/datasets/MattBoraske/Reddit-AITA-2018-to-2022)
- **Task**: Sequence classification with predefined labels.

## Model Details
- **Architecture**: Transformer-based model (DistilBERT)
- **Input Format**: Text sequences
- **Output Format**: Classification labels with confidence scores (see the sketch after this list)
- **Labels**:
  - `LABEL_0`: The Asshole
  - `LABEL_1`: Not the Asshole

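For finer-grained control, the model can also be loaded directly and its logits mapped back to the verdict labels listed above. The snippet below is a minimal sketch; it assumes the same placeholder model id used in the pipeline example further down, which should be replaced with the actual repository name.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder model id -- replace with the actual repository name.
model_name = "your-username/your-model-name"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "I did not invite my friend to my wedding. AITA?"
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to probabilities and map the winning index to the labels above.
probs = torch.softmax(logits, dim=-1)[0]
labels = {0: "The Asshole", 1: "Not the Asshole"}
predicted = int(probs.argmax())
print(labels[predicted], float(probs[predicted]))
```
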
## Intended Use
This model is intended to provide assessments of user-submitted textual scenarios by classifying them into the two labels above. It is designed for binary classification tasks of this kind.

### Example Usage

```python
from transformers import pipeline

# Load the fine-tuned classifier from the Hub (placeholder id -- replace with the actual repository name).
classifier = pipeline(
    "text-classification",
    model="your-username/your-model-name"
)

text = "I did not invite my friend to my wedding. AITA?"
result = classifier(text)
print(result)
```
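
The pipeline returns a list of dictionaries containing the predicted label and its confidence score, e.g. output of the form `[{'label': 'LABEL_1', 'score': 0.93}]` (the score here is illustrative). Map `LABEL_0`/`LABEL_1` to the verdict names listed under Model Details.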