Update README.md
README.md
CHANGED
@@ -58,23 +58,29 @@ pipeline_tag: text-generation

**Overview**
- We conducted a performance evaluation based on the tasks evaluated on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). We evaluated our model on four benchmark datasets: `ARC-Challenge`, `HellaSwag`, `MMLU`, and `TruthfulQA`. We used the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) repository, specifically commit `b281b0921b636bc36ad05c0b0b0763bd6dd43463`. The evaluation environment can be reproduced with the commands below.

**Scripts**
- prepare the evaluation environment:
```
# clone the repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git

# change into the repository directory
cd lm-evaluation-harness

# check out the specific commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463

# install the harness and its dependencies
pip install -e .
```
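
The script above only prepares the repository. As a minimal sketch of how a single benchmark might then be launched through the harness's `main.py` interface at this commit (the model name `upstage/llama-65b-instruct` and the output path are placeholders; the few-shot counts follow the Open LLM Leaderboard settings):
```
# Sketch only: the pretrained model name and output path are placeholders.
# The Open LLM Leaderboard runs ARC-Challenge 25-shot, HellaSwag 10-shot,
# MMLU (the hendrycksTest-* tasks) 5-shot, and TruthfulQA (truthfulqa_mc) 0-shot.
python main.py \
    --model hf-causal \
    --model_args pretrained=upstage/llama-65b-instruct \
    --tasks arc_challenge \
    --num_fewshot 25 \
    --batch_size 1 \
    --output_path results/arc_challenge.json
```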

**Main Results**

| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA |
|-----------------------------------------------|---------|-------|-----------|-------|------------|
| llama-65b-instruct (***Ours***, ***Local Reproduction***) | **69.4** | **67.6** | **86.5** | **64.9** | **58.8** |
| llama-30b-instruct-2048 (***Ours***, ***Open LLM Leaderboard***) | 64.7 | 58.3 | 82.5 | 61.4 | 56.5 |
| falcon-40b-instruct | 63.4 | 61.6 | 84.3 | 55.4 | 52.5 |
| llama-30b-instruct (***Ours***, ***Open LLM Leaderboard***) | 63.2 | 56.7 | 84.0 | 59.0 | 53.1 |
| llama-65b | 62.1 | 57.6 | 84.3 | 63.4 | 43.0 |
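
The Average column is consistent with a simple unweighted mean of the four task scores (an assumption about the aggregation, checked here against the top row):
```
# quick check: unweighted mean of the llama-65b-instruct scores
python3 -c 'print((67.6 + 86.5 + 64.9 + 58.8) / 4)'  # 69.45, reported as 69.4
```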

## Ethical Issues

**Ethical Considerations**