mjdousti committed
Commit 4407029
1 Parent(s): ff69d78

Update README.md

Files changed (1):
  README.md +5 -5
README.md CHANGED
@@ -111,11 +111,11 @@ model = LlamaForCausalLM.from_pretrained(
 
 ### Evaluating Quantized Models
 
-| Model | <span style="font-variant:small-caps;">Belebele</span> (Persian) | Fa→En Translation | En→Fa Translation | Model Size | Tokens/sec |
-| :----------------------------------------------------------------- | :--------------------------------------------------------------: | :---------------: | :---------------: | :--------: | :--------: |
-| <span style="font-variant:small-caps;">PersianMind</span> (`BF16`) | 73.9 | 83.61 | 79.44 | 13.7G | 25.35 |
-| <span style="font-variant:small-caps;">PersianMind</span> (`INT8`) | 73.7 | 82.32 | 78.61 | 7.2G | 11.36 |
-| <span style="font-variant:small-caps;">PersianMind</span> (`INT4`) | 70.2 | 82.07 | 80.36 | 3.9G | 24.36 |
+| Model | <span style="font-variant:small-caps;">Belebele</span> (Persian) | Fa→En Translation<br>(<span style="font-variant:small-caps;">Comet</span>) | En→Fa Translation<br>(<span style="font-variant:small-caps;">Comet</span>) | Model Size | Tokens/sec |
+| :----------------------------------------------------------------: | :--------------------------------------------------------------: | :------------------------------------------------------------------------: | :------------------------------------------------------------------------: | :--------: | :--------: |
+| <span style="font-variant:small-caps;">PersianMind</span> (`BF16`) | 73.9 | 83.61 | 79.44 | 13.7G | 25.35 |
+| <span style="font-variant:small-caps;">PersianMind</span> (`INT8`) | 73.7 | 82.32 | 78.61 | 7.2G | 11.36 |
+| <span style="font-variant:small-caps;">PersianMind</span> (`INT4`) | 70.2 | 82.07 | 80.36 | 3.9G | 24.36 |
 
 We evaluated quantized models in various tasks against the original model.
 Specifically, we evaluated all models using the reading comprehension multiple-choice
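The table in this hunk compares the original `BF16` checkpoint against 8-bit and 4-bit quantized variants. The commit itself does not show how those variants are produced, but a minimal sketch with 🤗 Transformers and bitsandbytes might look like the following; the model ID and the quantization settings are assumptions based on the repository context, not part of this commit:

```python
# Hedged sketch: loading BF16 / INT8 / INT4 variants of a Llama-family model
# with transformers + bitsandbytes. The model ID is an assumption.
import torch
from transformers import LlamaForCausalLM, BitsAndBytesConfig

MODEL_ID = "universitytehran/PersianMind-v1.0"  # assumed model ID

# BF16 baseline (~13.7G in the table above)
model_bf16 = LlamaForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# INT8 variant (~7.2G): weights quantized to 8 bits at load time
model_int8 = LlamaForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

# INT4 variant (~3.9G): 4-bit weights with BF16 compute
model_int4 = LlamaForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
    device_map="auto",
)
```

The Tokens/sec column is consistent with common bitsandbytes behavior: 8-bit matmul trades speed for memory, while the 4-bit kernels run close to the BF16 baseline.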
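The substantive change in this commit is labelling the two translation columns with the metric used, <span style="font-variant:small-caps;">Comet</span>. As a hedged sketch, scores like these could be computed with the `unbabel-comet` package; the checkpoint name and the 0–100 scaling are assumptions, since the commit does not state which <span style="font-variant:small-caps;">Comet</span> model was used:

```python
# Hedged sketch: scoring Fa->En translations with unbabel-comet.
# The checkpoint and the x100 scaling are assumptions.
from comet import download_model, load_from_checkpoint

ckpt = download_model("Unbabel/wmt22-comet-da")  # assumed Comet checkpoint
comet = load_from_checkpoint(ckpt)

samples = [
    {
        "src": "این یک جمله‌ی آزمایشی است.",  # Persian source
        "mt": "This is a test sentence.",      # system translation
        "ref": "This is a test sentence.",     # human reference
    },
]
out = comet.predict(samples, batch_size=8, gpus=0)
print(100 * out.system_score)  # scaled to match the table's 0-100 range
```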