Update README.md
README.md (changed)
@@ -32,11 +32,11 @@ There are three versions of models released. The details are:
 | [zero-shot-classify-SSTuning-large](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-large) | [roberta-large](https://huggingface.co/roberta-large) | 355M | Medium | Medium | 5.12M |
 | [zero-shot-classify-SSTuning-ALBERT](https://huggingface.co/DAMO-NLP-SG/zero-shot-classify-SSTuning-ALBERT) | [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) | 235M | High | Low | 5.12M |
 
-
+Please note that zero-shot-classify-SSTuning-base is trained with more data (20.48M) than in the paper, which increases accuracy.
 
 
 ## Intended uses & limitations
 
-The model can be used for zero-shot text classification such sentiment analysis and topic
+The model can be used for zero-shot text classification such as sentiment analysis and topic classification. No further fine-tuning is needed.
 
 The number of labels should be 2 ~ 20.
 
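For context, the added intended-use note says the checkpoints can be queried directly without fine-tuning. Below is a minimal sketch of what that might look like, under stated assumptions: it loads the checkpoint named in the table above as a standard Hugging Face sequence-classification model and formats the candidate labels as lettered options prepended to the input text. The exact prompt template and option padding used during SSTuning training are not shown in this diff, so consult the model card before relying on this format.

import string
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Model id taken from the table above; the -base and -ALBERT variants load the same way.
model_id = "DAMO-NLP-SG/zero-shot-classify-SSTuning-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "I love this place! The food is always so fresh and delicious."
labels = ["negative", "positive"]  # the README says 2 ~ 20 labels are supported

# Assumption: candidate labels are presented as lettered options "(A) ... (B) ..."
# in front of the text, separated from it by the tokenizer's sep token.
options = " ".join(f"({string.ascii_uppercase[i]}) {lab}" for i, lab in enumerate(labels))
prompt = f"{options} {tokenizer.sep_token} {text}"

inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Assumption: the classification head scores the option slots in order,
# so only the first len(labels) logits are meaningful for this query.
pred = logits[0, : len(labels)].argmax().item()
print(labels[pred])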