We conducted a performance evaluation based on the tasks evaluated on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). We evaluated our model on four benchmark datasets: `ARC-Challenge`, `HellaSwag`, `MMLU`, and `TruthfulQA`. We used the lm-evaluation-harness repository, specifically commit `b281b0921b636bc36ad05c0b0b0763bd6dd43463`. The evaluation environment can be reproduced using the command below:
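The exact command is not reproduced here; a minimal sketch of pinning the evaluation environment to that commit, assuming the upstream EleutherAI lm-evaluation-harness repository, might look like:

```bash
# Minimal sketch: pin lm-evaluation-harness to the commit used for evaluation.
# The repository URL is assumed to be the upstream EleutherAI one.
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
cd lm-evaluation-harness
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463

# Install the harness and its dependencies in editable mode.
pip install -e .
```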
**Main Results**

| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA |
|--------------------------------------------------------|----------|----------|-----------|----------|------------|
| llama-65b-instruct (***Ours***, *Local Reproduction*) | **69.4** | **67.6** | **86.5** | **64.9** | **58.8** |