Update src/about.py
src/about.py +3 -2
src/about.py
CHANGED
@@ -73,8 +73,9 @@ While outstanding LLM models are being released competitively, most of them are
 If the icon is "?", it indicates that there is insufficient information about the model.
 Please provide information about the model through an issue! 🤩

-Note :
-
+Note 1 : We reserve the right to correct any incorrect tags/icons after manual verification to ensure the accuracy and reliability of the leaderboard.
+
+Note :warning: : Some models might be widely discussed as subjects of caution by the community, implying that users should exercise restraint when using them. Models that have used the evaluation set for training to achieve a high leaderboard ranking, among others, may be selected as subjects of caution and might result in their deletion from the leaderboard.

 ## How it works
 📈 We evaluate models using the impressive [LightEval](https://github.com/huggingface/lighteval), a unified and straightforward framework from the HuggingFace Eval Team to test and assess causal language models on a large number of different evaluation tasks.
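The "How it works" text in this diff delegates the actual scoring to LightEval. Purely as an illustrative sketch (not the leaderboard's own harness configuration), an evaluation run driven from Python might look like the following; the `lighteval accelerate` subcommand, its argument order, the model id, and the task spec string are all assumptions that vary across lighteval versions, so check the linked repository for the current syntax.

```python
# Hypothetical sketch only: launch a LightEval run for a single model via its CLI.
# The subcommand name, argument order, and task spec below are assumptions --
# consult https://github.com/huggingface/lighteval for the syntax of your version.
import subprocess

model_args = "pretrained=some-org/some-model"   # hypothetical Hub model id
tasks = "leaderboard|arc:challenge|25|0"        # hypothetical task spec string

result = subprocess.run(
    ["lighteval", "accelerate", model_args, tasks],
    capture_output=True,
    text=True,
    check=False,
)
# Print whatever the run produced so failures are visible too.
print(result.stdout or result.stderr)
```

In practice the leaderboard backend would loop such a call over every submitted model and parse the resulting scores, but those details are not part of this commit.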