AuriAetherwiing committed on
Commit 611a9d3
1 parent: 325e4a5
Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -31,7 +31,7 @@ model-index:
 <p>Model is available for inference on <a href=https://featherless.ai/models/EVA-UNIT-01/EVA-Qwen2.5-72B-v0.0>Featherless.AI</a></p>
 
 <p>Note: using quantized KV cache with Qwen2.5 <b>is not recommended</b> and can lead to degraded output quality. On the other hand, Qwen's KV cache is already light enough, so using f16 for it shouldn't be problematic.</p>
-<p>Note #2: due to some unexepected effects of data normalization, some artifacting in form of randomly appearring sequence of <code>—</code> can appear in outputs sometimes, if penalties are too high. To avoid it, ban token number <code>158</code>. Thanks to Cahvay/ALK for discovering this fix.</p>
+<p>Note #2: due to some unexpected effects of data normalization, some artifacting in form of randomly appearring sequence of <code>—</code> can appear in outputs sometimes, if penalties are too high. To avoid it, ban token number <code>158</code>. Thanks to Cahvay/ALK for discovering this fix!</p>
 <p>
 <p>Prompt format is ChatML.</p><br>
 <h3>Recommended sampler values:</h3>
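The first note above (keep the KV cache at f16 rather than quantizing it) translates to a launch option in llama.cpp-based frontends. A hedged sketch, assuming a recent llama.cpp build where `--cache-type-k`/`--cache-type-v` select the K/V cache precision; the GGUF filename here is a placeholder, not an official artifact name:

```shell
# Keep the KV cache at f16 (the default) instead of a quantized type
# such as q4_0/q8_0, per the README's note on degraded output quality.
./llama-server \
  -m EVA-Qwen2.5-72B-v0.0.Q4_K_M.gguf \
  --cache-type-k f16 \
  --cache-type-v f16
```

This is a config fragment, not a definitive invocation; other frontends expose the same choice under names like "KV cache quantization".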
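The second note's fix (banning token <code>158</code>) amounts to forcing that token's sampling probability to zero. A minimal sketch of the mechanism, assuming logits are a plain Python list; <code>ban_tokens</code> is a hypothetical helper for illustration, not part of any inference library:

```python
import math

def ban_tokens(logits, banned_ids=frozenset({158})):
    """Return logits with the banned token ids forced to -inf.

    A token whose logit is -inf receives probability 0 after softmax,
    so the em-dash-artifact token (id 158 per the note above) can never
    be sampled, regardless of penalty settings.
    """
    return [-math.inf if i in banned_ids else x for i, x in enumerate(logits)]
```

Most frontends expose an equivalent banned-tokens or logit-bias field; in Hugging Face transformers, for example, passing `bad_words_ids=[[158]]` to `model.generate()` has the same effect.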