Update README.md
README.md CHANGED
@@ -58,6 +58,11 @@ pipeline_tag: text-generation
**Overview**

- We evaluated performance on the tasks used by the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard): four benchmark datasets, `ARC-Challenge`, `HellaSwag`, `MMLU`, and `TruthfulQA`. We used the lm-evaluation-harness repository at commit `b281b0921b636bc36ad05c0b0b0763bd6dd43463`. The evaluation environment can be reproduced with the commands below:

```
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
cd lm-evaluation-harness
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
```
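
For reference, a single evaluation run with this harness version might look like the sketch below. It assumes the `main.py` entry point and flags available around this commit; the model id `your-org/your-model`, the batch size, and the output path are placeholders, and the few-shot counts follow the Open LLM Leaderboard settings (25 for ARC, 10 for HellaSwag, 5 for MMLU, 0 for TruthfulQA).

```
# Sketch of one leaderboard-style run (flags assumed for this harness version).
# Run once per task with its few-shot count:
#   arc_challenge: 25, hellaswag: 10, hendrycksTest-* (MMLU): 5, truthfulqa_mc: 0
python main.py \
    --model hf-causal \
    --model_args pretrained=your-org/your-model \
    --tasks arc_challenge \
    --num_fewshot 25 \
    --batch_size 4 \
    --output_path results/arc_challenge.json
```
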
**Main Results**

| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA |