MiriUll committed
Commit 91ec8a2 · 1 Parent(s): 5915464

Update README.md

Files changed (1): README.md +38 -0
README.md CHANGED
---
license: mit
language:
- en
pipeline_tag: text-classification
---

# Model Card for NegBLEURT

This model is a negation-aware version of the BLEURT metric for the evaluation of generated text.

## Direct Use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "tum-nlp/NegBLEURT"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

references = ["Ray Charles is legendary.", "Ray Charles is legendary."]
candidates = ["Ray Charles is a legend.", "Ray Charles isn’t legendary."]

# Score each reference/candidate pair with the model's regression head.
tokenized = tokenizer(references, candidates, return_tensors='pt', padding=True)
print(model(**tokenized).logits)
# returns scores 0.8409 and 0.2601 for the two candidates
```
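
Since the metadata above declares `pipeline_tag: text-classification`, the same pairs can also be scored through the generic `transformers` pipeline. This is a minimal sketch, not part of the original card; it relies on standard pipeline behavior, where `function_to_apply="none"` returns the raw regression output and sentence pairs are passed as `text`/`text_pair` dicts.

```python
from transformers import pipeline

# Sketch: score reference/candidate pairs via the text-classification pipeline.
# function_to_apply="none" keeps the raw regression logit instead of applying
# a sigmoid/softmax on top of the single-label head.
scorer = pipeline("text-classification", model="tum-nlp/NegBLEURT", function_to_apply="none")

pairs = [
    {"text": "Ray Charles is legendary.", "text_pair": "Ray Charles is a legend."},
    {"text": "Ray Charles is legendary.", "text_pair": "Ray Charles isn’t legendary."},
]
print(scorer(pairs))  # the NegBLEURT score of each pair is in the "score" field
```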

## Training Details

The model is a fine-tuned version of the [bleurt-tiny](https://github.com/google-research/bleurt/tree/master/bleurt/test_checkpoint) checkpoint from the official BLEURT repository.
It was fine-tuned on the CANNOT dataset's train split for 500 steps using the [fine-tuning script](https://github.com/google-research/bleurt/blob/master/bleurt/finetune.py) provided by BLEURT.
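
For orientation, the shape of that fine-tuning setup can be sketched in Python. BLEURT's fine-tuning script consumes JSONL records with `candidate`, `reference`, and `score` fields; the pairs and score convention below are illustrative placeholders, not the actual CANNOT data.

```python
import json

# Hedged sketch: write reference/candidate pairs in the JSONL format consumed
# by BLEURT's finetune.py ({"candidate": ..., "reference": ..., "score": ...}).
def write_bleurt_train_file(pairs, path):
    with open(path, "w", encoding="utf-8") as f:
        for reference, candidate, score in pairs:
            record = {"candidate": candidate, "reference": reference, "score": score}
            f.write(json.dumps(record) + "\n")

# Toy pairs only; the real training data is the CANNOT train split.
write_bleurt_train_file(
    [("Ray Charles is legendary.", "Ray Charles is a legend.", 1.0),
     ("Ray Charles is legendary.", "Ray Charles isn’t legendary.", 0.0)],
    "cannot_train.jsonl",
)
# finetune.py is then pointed at this file and run for 500 steps; see the
# linked script for the exact command-line flags.
```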

## Citation

Please cite our INLG 2023 paper if you use our model.

**BibTeX:**
tba