MoritzLaurer committed on
Commit 42509fb · 1 Parent(s): fd230ca

Create README.md

Files changed (1)
  1. README.md +69 -0
README.md ADDED
@@ -0,0 +1,69 @@
---
language:
- en
tags:
- text-classification
- zero-shot-classification
metrics:
- accuracy
widget:
- text: "I first thought that I liked the movie, but upon second thought the movie was actually disappointing. [SEP] The movie was good."

---
# MiniLM-L6-mnli-fever-docnli-ling-2c
## Model description
This model was trained on 1,279,665 hypothesis-premise pairs from 8 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [DocNLI](https://arxiv.org/pdf/2106.09449.pdf) (which includes [ANLI](https://github.com/facebookresearch/anli), QNLI, DUC, CNN/DailyMail, Curation).

It is the only model in the model hub trained on these 8 NLI datasets, including DocNLI, whose very long texts help the model learn long-range reasoning. Note that the model was trained on binary NLI to predict either "entailment" or "not-entailment": following DocNLI, the classes "neutral" and "contradiction" are merged into "not-entailment" to create more training data.

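
As an illustration of this binary setup, here is a minimal sketch of how a standard three-class NLI dataset can be collapsed into the two classes. It assumes the `multi_nli` label convention (0 = entailment, 1 = neutral, 2 = contradiction) and is not the exact preprocessing used for training:

```python
from datasets import load_dataset

# MultiNLI labels: 0 = entailment, 1 = neutral, 2 = contradiction.
# For the binary ("2c") setup, neutral and contradiction are collapsed
# into a single "not-entailment" class (label 1).
def to_binary(example):
    example["label"] = 0 if example["label"] == 0 else 1
    return example

mnli_binary = load_dataset("multi_nli", split="train").map(to_binary)
```
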
The base model is MiniLM-L6 from Microsoft, which is very fast but somewhat less accurate than larger models.

## Intended uses & limitations
#### How to use the model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "MoritzLaurer/MiniLM-L6-mnli-fever-docnli-ling-2c"
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)

premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."

inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt").to(device)
output = model(**inputs)
prediction = torch.softmax(output["logits"][0], -1).tolist()
# The model is binary: index 0 = entailment, index 1 = not-entailment.
label_names = ["entailment", "not_entailment"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```
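
Since the card is tagged for zero-shot classification: an NLI model like this one can score arbitrary candidate labels by phrasing each label as a hypothesis and comparing the entailment probabilities. The example text, candidate labels, and hypothesis template below are purely illustrative:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "MoritzLaurer/MiniLM-L6-mnli-fever-docnli-ling-2c"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

text = "The new GPU delivers twice the performance at the same power draw."
candidate_labels = ["technology", "sports", "politics"]  # illustrative labels

scores = {}
for label in candidate_labels:
    hypothesis = f"This text is about {label}."  # illustrative template
    inputs = tokenizer(text, hypothesis, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Index 0 is assumed to be "entailment" (see the label order above).
    scores[label] = torch.softmax(logits[0], -1)[0].item()

print(max(scores, key=scores.get), scores)
```

The `zero-shot-classification` pipeline in `transformers` is built on the same idea for standard three-class NLI models.
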
### Training data
This model was trained on 1,279,665 hypothesis-premise pairs from 8 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [DocNLI](https://arxiv.org/pdf/2106.09449.pdf) (which includes [ANLI](https://github.com/facebookresearch/anli), QNLI, DUC, CNN/DailyMail, Curation).

### Training procedure
MiniLM-L6-mnli-fever-docnli-ling-2c was trained using the Hugging Face Trainer with the following hyperparameters.
```python
training_args = TrainingArguments(
    num_train_epochs=3,              # total number of training epochs
    learning_rate=2e-05,
    per_device_train_batch_size=32,  # batch size per device during training
    per_device_eval_batch_size=32,   # batch size for evaluation
    warmup_ratio=0.1,                # fraction of training steps used for learning rate warmup
    weight_decay=0.06,               # strength of weight decay
    fp16=True                        # mixed precision training
)
```
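
For context, here is a rough sketch of how these arguments could be combined with a base checkpoint and a binary NLI dataset in the `Trainer` API. The base checkpoint, output directory, and the single-dataset setup are assumptions for illustration, not the author's actual training script:

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Assumed MiniLM-L6 checkpoint; the card only states "MiniLM-L6 from Microsoft".
base_checkpoint = "nreimers/MiniLM-L6-H384-uncased"
tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(base_checkpoint, num_labels=2)

# Illustrative data: MultiNLI only, collapsed to the binary label scheme.
train_data = load_dataset("multi_nli", split="train")
train_data = train_data.map(lambda ex: {"label": 0 if ex["label"] == 0 else 1})
train_data = train_data.map(
    lambda ex: tokenizer(ex["premise"], ex["hypothesis"], truncation=True),
    batched=True,
)

training_args = TrainingArguments(
    output_dir="./results",          # placeholder output directory
    num_train_epochs=3,
    learning_rate=2e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    warmup_ratio=0.1,
    weight_decay=0.06,
    fp16=True,
)

trainer = Trainer(model=model, args=training_args, train_dataset=train_data, tokenizer=tokenizer)
trainer.train()
```
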
### Eval results
The model was evaluated using the binary test sets for MultiNLI and ANLI and the binary dev set for Fever-NLI. The metric used is accuracy.

mnli-m | mnli-mm | fever-nli | anli-all | anli-r3
-------|---------|-----------|----------|--------
(to upload)

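
For reference, the binary accuracy on mnli-m can be computed along the following lines. The split choice and label collapsing are assumptions mirroring the training setup, not the author's exact evaluation code:

```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "MoritzLaurer/MiniLM-L6-mnli-fever-docnli-ling-2c"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).eval()

# mnli-m: matched validation split; neutral (1) and contradiction (2)
# are collapsed into not-entailment (1) to match the binary label scheme.
data = load_dataset("multi_nli", split="validation_matched")
gold = [0 if label == 0 else 1 for label in data["label"]]

preds = []
batch_size = 32
for i in range(0, len(data), batch_size):
    batch = data[i : i + batch_size]
    inputs = tokenizer(batch["premise"], batch["hypothesis"],
                       truncation=True, padding=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    preds.extend(logits.argmax(dim=-1).tolist())

accuracy = sum(p == g for p, g in zip(preds, gold)) / len(gold)
print(f"mnli-m accuracy: {accuracy:.3f}")
```
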
## Limitations and bias
Please consult the original MiniLM paper and the literature on the different NLI datasets for potential biases.

### BibTeX entry and citation info
If you want to cite this model, please cite the original MiniLM paper and the respective NLI datasets, and include a link to this model on the Hugging Face hub.