# Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-en](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-en): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en`

This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-en](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-en), produced with [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool that trims the vocabulary of a language model to reduce its size.

The following table summarizes the trimming process.

|                            | cardiffnlp/xlm-roberta-base-tweet-sentiment-en | vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en |
|:---------------------------|:-----------------------------------------------|:------------------------------------------------------------|
| parameter_size_full        | 278,045,955                                    | 219,090,435                                                 |
| parameter_size_embedding   | 192,001,536                                    | 133,046,016                                                 |
| vocab_size                 | 250,002                                        | 173,237                                                     |
| compression_rate_full      | 100.0                                          | 78.8                                                        |
| compression_rate_embedding | 100.0                                          | 69.29                                                       |
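The figures above are internally consistent: each embedding row has 768 parameters (the hidden dimension of XLM-RoBERTa base, so `parameter_size_embedding = vocab_size × 768`), and the entire parameter reduction comes from the smaller embedding matrix. A quick sketch of the arithmetic, using only the numbers from the table:

```python
hidden_size = 768  # hidden dimension of XLM-RoBERTa base

# Values taken from the summary table above.
full_vocab, trimmed_vocab = 250_002, 173_237
full_params, trimmed_params = 278_045_955, 219_090_435

# Embedding parameters are vocab_size * hidden_size.
full_emb = full_vocab * hidden_size        # 192,001,536
trimmed_emb = trimmed_vocab * hidden_size  # 133,046,016

# The whole reduction is accounted for by the embedding matrix.
assert full_params - trimmed_params == full_emb - trimmed_emb

# Compression rates as reported in the table.
print(round(100 * trimmed_params / full_params, 1))  # 78.8
print(round(100 * trimmed_emb / full_emb, 2))        # 69.29
```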
The following table shows the parameters used to trim the vocabulary.

| language | dataset                     | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:---------|:----------------------------|:---------------|:-------------|:--------------|:------------------|--------------:|
| en       | vocabtrimmer/mc4_validation | text           | en           | validation    |                   |             2 |