librarian-bot committed on
Commit
76f7675
1 Parent(s): c969b26

Librarian Bot: Update Hugging Face dataset ID


This pull request updates the ID of the dataset used to train the model to its new Hub identifier `facebook/anli` (migrated from `anli`). We have been working to migrate datasets to their own repositories on the Hub, and this change is part of that effort.
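For reference, the dataset can be loaded directly under the new ID with the `datasets` library; a minimal sketch (the split names follow the standard ANLI rounds):

```python
from datasets import load_dataset

# Load ANLI under its new canonical Hub ID.
anli = load_dataset("facebook/anli")

# ANLI ships one train/dev/test split per adversarial round (r1-r3).
print(anli["test_r3"][0])
```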

Updating the dataset ID in the model card will ensure that the model card is correctly linked to the dataset repository on the Hub. This will also make it easier for people to find your model via the training data used to create it.
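For maintainers who prefer to apply this kind of change themselves, the snippet below is a rough sketch of how a card's `datasets` metadata can be edited with the `huggingface_hub` `ModelCard` API and opened as a pull request. It is only an illustration, not Librarian Bot's actual implementation, and the repo ID is assumed from the model card heading below.

```python
from huggingface_hub import ModelCard

# Repo this PR appears to target (taken from the card heading).
REPO_ID = "MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7"

card = ModelCard.load(REPO_ID)

# Swap the legacy dataset ID for the new Hub identifier in the card metadata.
card.data.datasets = [
    "facebook/anli" if ds == "anli" else ds
    for ds in (card.data.datasets or [])
]

# Open the change as a pull request instead of committing directly to main.
card.push_to_hub(
    REPO_ID,
    create_pr=True,
    commit_message="Update Hugging Face dataset ID",
)
```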

This PR comes courtesy of [Librarian Bot](https://huggingface.co/librarian-bot). If you have any feedback, queries, or need assistance, please don't hesitate to reach out to [@davanstrien](https://huggingface.co/davanstrien).

Files changed (1)
  1. README.md +64 -76
README.md CHANGED
@@ -1,5 +1,5 @@
 ---
-language:
+language:
 - multilingual
 - zh
 - ja
@@ -27,118 +27,106 @@ language:
 - he
 - sw
 - ps
+license: mit
 tags:
 - zero-shot-classification
 - text-classification
 - nli
 - pytorch
-license: mit
-metrics:
-- accuracy
 datasets:
 - MoritzLaurer/multilingual-NLI-26lang-2mil7
 - xnli
 - multi_nli
-- anli
+- facebook/anli
 - fever
 - lingnli
 - alisawuffles/WANLI
+metrics:
+- accuracy
 pipeline_tag: zero-shot-classification
-#- text-classification
 widget:
-- text: "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
-  candidate_labels: "politics, economy, entertainment, environment"
-
-model-index: # info: https://github.com/huggingface/hub-docs/blame/main/modelcard.md
+- text: Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU
+  candidate_labels: politics, economy, entertainment, environment
+model-index:
 - name: DeBERTa-v3-base-xnli-multilingual-nli-2mil7
   results:
   - task:
-      type: text-classification # Required. Example: automatic-speech-recognition
-      name: Natural Language Inference # Optional. Example: Speech Recognition
+      type: text-classification
+      name: Natural Language Inference
     dataset:
-      type: multi_nli # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
-      name: MultiNLI-matched # Required. A pretty name for the dataset. Example: Common Voice (French)
-      split: validation_matched # Optional. Example: test
+      name: MultiNLI-matched
+      type: multi_nli
+      split: validation_matched
     metrics:
-    - type: accuracy # Required. Example: wer. Use metric id from https://hf.co/metrics
-      value: 0,857 # Required. Example: 20.90
-      #name: # Optional. Example: Test WER
-      verified: false # Optional. If true, indicates that evaluation was generated by Hugging Face (vs. self-reported).
+    - type: accuracy
+      value: 0,857
+      verified: false
   - task:
-      type: text-classification # Required. Example: automatic-speech-recognition
-      name: Natural Language Inference # Optional. Example: Speech Recognition
+      type: text-classification
+      name: Natural Language Inference
     dataset:
-      type: multi_nli # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
-      name: MultiNLI-mismatched # Required. A pretty name for the dataset. Example: Common Voice (French)
-      split: validation_mismatched # Optional. Example: test
+      name: MultiNLI-mismatched
+      type: multi_nli
+      split: validation_mismatched
     metrics:
-    - type: accuracy # Required. Example: wer. Use metric id from https://hf.co/metrics
-      value: 0,856 # Required. Example: 20.90
-      #name: # Optional. Example: Test WER
-      verified: false # Optional. If true, indicates that evaluation was generated by Hugging Face (vs. self-reported).
+    - type: accuracy
+      value: 0,856
+      verified: false
   - task:
-      type: text-classification # Required. Example: automatic-speech-recognition
-      name: Natural Language Inference # Optional. Example: Speech Recognition
+      type: text-classification
+      name: Natural Language Inference
     dataset:
-      type: anli # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
-      name: ANLI-all # Required. A pretty name for the dataset. Example: Common Voice (French)
-      split: test_r1+test_r2+test_r3 # Optional. Example: test
+      name: ANLI-all
+      type: anli
+      split: test_r1+test_r2+test_r3
     metrics:
-    - type: accuracy # Required. Example: wer. Use metric id from https://hf.co/metrics
-      value: 0,537 # Required. Example: 20.90
-      #name: # Optional. Example: Test WER
-      verified: false # Optional. If true, indicates that evaluation was generated by Hugging Face (vs. self-reported).
+    - type: accuracy
+      value: 0,537
+      verified: false
   - task:
-      type: text-classification # Required. Example: automatic-speech-recognition
-      name: Natural Language Inference # Optional. Example: Speech Recognition
+      type: text-classification
+      name: Natural Language Inference
     dataset:
-      type: anli # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
-      name: ANLI-r3 # Required. A pretty name for the dataset. Example: Common Voice (French)
-      split: test_r3 # Optional. Example: test
+      name: ANLI-r3
+      type: anli
+      split: test_r3
     metrics:
-    - type: accuracy # Required. Example: wer. Use metric id from https://hf.co/metrics
-      value: 0,497 # Required. Example: 20.90
-      #name: # Optional. Example: Test WER
-      verified: false # Optional. If true, indicates that evaluation was generated by Hugging Face (vs. self-reported).
+    - type: accuracy
+      value: 0,497
+      verified: false
  - task:
-      type: text-classification # Required. Example: automatic-speech-recognition
-      name: Natural Language Inference # Optional. Example: Speech Recognition
+      type: text-classification
+      name: Natural Language Inference
     dataset:
-      type: alisawuffles/WANLI # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
-      name: WANLI # Required. A pretty name for the dataset. Example: Common Voice (French)
-      split: test # Optional. Example: test
+      name: WANLI
+      type: alisawuffles/WANLI
+      split: test
     metrics:
-    - type: accuracy # Required. Example: wer. Use metric id from https://hf.co/metrics
-      value: 0,732 # Required. Example: 20.90
-      #name: # Optional. Example: Test WER
-      verified: false # Optional. If true, indicates that evaluation was generated by Hugging Face (vs. self-reported).
+    - type: accuracy
+      value: 0,732
+      verified: false
   - task:
-      type: text-classification # Required. Example: automatic-speech-recognition
-      name: Natural Language Inference # Optional. Example: Speech Recognition
+      type: text-classification
+      name: Natural Language Inference
     dataset:
-      type: lingnli # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
-      name: LingNLI # Required. A pretty name for the dataset. Example: Common Voice (French)
-      split: test # Optional. Example: test
+      name: LingNLI
+      type: lingnli
+      split: test
     metrics:
-    - type: accuracy # Required. Example: wer. Use metric id from https://hf.co/metrics
-      value: 0,788 # Required. Example: 20.90
-      #name: # Optional. Example: Test WER
-      verified: false # Optional. If true, indicates that evaluation was generated by Hugging Face (vs. self-reported).
+    - type: accuracy
+      value: 0,788
+      verified: false
   - task:
-      type: text-classification # Required. Example: automatic-speech-recognition
-      name: Natural Language Inference # Optional. Example: Speech Recognition
+      type: text-classification
+      name: Natural Language Inference
     dataset:
-      type: fever-nli # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
-      name: fever-nli # Required. A pretty name for the dataset. Example: Common Voice (French)
-      split: test # Optional. Example: test
+      name: fever-nli
+      type: fever-nli
+      split: test
     metrics:
-    - type: accuracy # Required. Example: wer. Use metric id from https://hf.co/metrics
-      value: 0,761 # Required. Example: 20.90
-      #name: # Optional. Example: Test WER
-      verified: false # Optional. If true, indicates that evaluation was generated by Hugging Face (vs. self-reported).
-
-
-
 ---
 # Model card for mDeBERTa-v3-base-xnli-multilingual-nli-2mil7
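For context, the `widget` entry in the updated metadata corresponds to a zero-shot classification call such as the sketch below (the model ID is assumed from the card heading):

```python
from transformers import pipeline

# Multilingual zero-shot classification with the model this card describes.
classifier = pipeline(
    "zero-shot-classification",
    model="MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7",
)

text = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
labels = ["politics", "economy", "entertainment", "environment"]
print(classifier(text, candidate_labels=labels))
```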