Adding Evaluation Results

#1
Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -117,3 +117,17 @@ Current evals out of the Pygmalion-13b model: <br>
 The intended use-case for this model is fictional conversation for entertainment purposes. Any other sort of usage is out of scope.
 
 As such, it was **not** fine-tuned to be safe and harmless: the base model _and_ this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TehVenom__Pygmalion-13b-Merged)
+
+| Metric              | Value |
+|---------------------|-------|
+| Avg.                | 44.8  |
+| ARC (25-shot)       | 56.48 |
+| HellaSwag (10-shot) | 80.02 |
+| MMLU (5-shot)       | 42.93 |
+| TruthfulQA (0-shot) | 35.86 |
+| Winogrande (5-shot) | 75.53 |
+| GSM8K (5-shot)      | 0.08  |
+| DROP (3-shot)       | 22.67 |
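The Avg. row appears to be the unweighted arithmetic mean of the seven benchmark scores, rounded to one decimal place; a quick sanity check in Python (scores copied from the table above):

```python
# Benchmark scores from the Open LLM Leaderboard table above.
scores = {
    "ARC (25-shot)": 56.48,
    "HellaSwag (10-shot)": 80.02,
    "MMLU (5-shot)": 42.93,
    "TruthfulQA (0-shot)": 35.86,
    "Winogrande (5-shot)": 75.53,
    "GSM8K (5-shot)": 0.08,
    "DROP (3-shot)": 22.67,
}

# The "Avg." row matches the plain arithmetic mean of the seven benchmarks.
avg = round(sum(scores.values()) / len(scores), 1)
print(avg)  # 44.8
```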