sana-ngu committed
Commit af7c479 · 1 Parent(s): cc0ee26

Create README.md

Files changed (1)
  1. README.md +29 -0
README.md ADDED
---
language:
- en
---
### BERTweet-large-sexism-detector
This is a fine-tuned version of BERTweet-large on the Explainable Detection of Online Sexism (EDOS) dataset. It is intended to be used as a classification model for identifying tweets (0 - not sexist; 1 - sexist).

More information about the original pre-trained model can be found [here](https://huggingface.co/docs/transformers/model_doc/bertweet).

Classification examples:

|Prediction|Tweet|
|-----|--------|
|sexist|Every woman wants to be a model. It's codeword for "I get everything for free and people want me"|
|not sexist|basically I placed more value on her than I should then?|
# More Details
For more details about the datasets and evaluation results, see our paper (we will update this page with the paper link).
# How to use
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

model = AutoModelForSequenceClassification.from_pretrained('NLP-LTU/BERTweet-large-sexism-detector')
tokenizer = AutoTokenizer.from_pretrained('vinai/bertweet-large')
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)

prediction = classifier("Every woman wants to be a model. It's codeword for 'I get everything for free and people want me' ")
# The pipeline returns a list of dicts such as [{'label': ..., 'score': ...}];
# map the predicted label back to a readable class (0 - not sexist; 1 - sexist)
label_pred = 'not sexist' if prediction[0]['label'] in ('LABEL_0', 'not sexist') else 'sexist'

print(label_pred)
```