# Macaroni 7B

Macaroni 7B is an experimental merge of pre-trained Mistral language models with fblgit/UNA-TheBeagle-7b-v1.
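Since the result is a standard Mistral-architecture checkpoint, it can be loaded like any other causal LM. Below is a minimal usage sketch with the `transformers` library; the repo id `andrijdavid/macaroni-7b` is taken from this card, while the prompt and decoding settings are illustrative assumptions, not prescribed by the model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "andrijdavid/macaroni-7b"

# Load tokenizer and model; device_map="auto" spreads weights across available devices.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Explain the difference between a list and a tuple in Python."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sample a completion; these generation parameters are illustrative, not from the card.
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```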
## Disclaimer

- **No Warranty:** The Model is provided on an "AS IS" basis, without warranty of any kind. The entire risk as to the quality, performance, and use of The Model is with the user.
- **Limitation of Liability:** In no event shall the creator(s) of The Model be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with The Model or the use or other dealings in The Model.
- **Accuracy and Risks:** The creator(s) do not warrant that The Model is free from errors or inaccuracies and disclaim any responsibility for any harm resulting from the use of The Model.
- **Use at Your Own Risk:** Users are solely responsible for any consequences resulting from the use of The Model, including but not limited to any changes made to The Model by the user or the results produced by The Model.
- **Compliance with Laws:** Users are solely responsible for ensuring that their use of The Model complies with all applicable laws, regulations, and policies.
- **Ethical Use:** Users are encouraged to use The Model ethically and responsibly. The creator(s) disclaim any responsibility for misuse or unethical use of The Model.
- **Modifications:** Any modifications made to The Model by third parties are the sole responsibility of the party making the modifications; the original creator(s) shall not be responsible for such modifications.
## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.
| Benchmark | Metric | Split | Value |
|---|---|---|---:|
| Avg. | | | 74.60 |
| AI2 Reasoning Challenge (25-shot) | normalized accuracy | test | 73.12 |
| HellaSwag (10-shot) | normalized accuracy | validation | 88.17 |
| MMLU (5-shot) | accuracy | test | 64.58 |
| TruthfulQA (0-shot) | mc2 | validation | 68.76 |
| Winogrande (5-shot) | accuracy | validation | 84.37 |
| GSM8k (5-shot) | accuracy | test | 68.61 |
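These numbers come from the automated Open LLM Leaderboard run. As a rough sketch, a single benchmark could be re-run locally with EleutherAI's lm-evaluation-harness; the v0.4+ Python API and the settings below are assumptions on my part, since the leaderboard pins its own harness version and configuration.

```python
# pip install lm-eval  (EleutherAI lm-evaluation-harness; v0.4+ API assumed)
import lm_eval

# Evaluate ARC-Challenge with 25 few-shot examples, matching the table above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=andrijdavid/macaroni-7b,dtype=float16",
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size=8,
)
print(results["results"]["arc_challenge"])
```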