Adding Evaluation Results

#1
Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -189,4 +189,17 @@ Please read this disclaimer carefully before using the large language model prov
  - Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
  - Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.

- By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
+ By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_h2oai__h2ogpt-gm-oasst1-en-1024-open-llama-7b-preview-400bt).
+
+ | Metric              | Value |
+ |---------------------|-------|
+ | Avg.                | 34.96 |
+ | ARC (25-shot)       | 41.3  |
+ | HellaSwag (10-shot) | 62.44 |
+ | MMLU (5-shot)       | 27.55 |
+ | TruthfulQA (0-shot) | 42.0  |
+ | Winogrande (5-shot) | 64.56 |
+ | GSM8K (5-shot)      | 1.52  |
+ | DROP (3-shot)       | 5.38  |
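
For reference, the reported Avg. is consistent with the plain unweighted mean of the seven per-benchmark scores in the table. A minimal Python sketch (not part of the diff; the averaging convention is an assumption, but the arithmetic matches):

```python
# Sketch: check that "Avg." equals the unweighted mean of the seven
# benchmark scores added in this PR (assumed averaging convention).
scores = {
    "ARC (25-shot)": 41.3,
    "HellaSwag (10-shot)": 62.44,
    "MMLU (5-shot)": 27.55,
    "TruthfulQA (0-shot)": 42.0,
    "Winogrande (5-shot)": 64.56,
    "GSM8K (5-shot)": 1.52,
    "DROP (3-shot)": 5.38,
}

avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 34.96, matching the "Avg." row above
```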