yukiontheiceberg committed
Commit: fa1322b · Parent(s): 57b9d49
Update README.md
README.md
CHANGED
@@ -20,14 +20,14 @@ Despite being trained on a smaller dataset of 1.4 trillion tokens—compared to
 It demonstrates superior performance in benchmarks like MMLU, HumanEval, and MBPP.
 Compared with other similar work, CrystalCoder is well balanced between language and coding tasks.
 
-
-
-| Mistral 7B
-| **CrystalCoder 7B** | 1.4T | 47.01 | 71.97
-| CodeLlaMA 7B
-| OpenLLaMA v2 7B
-| LLaMA 2 7B
-| StarCoder-15B
+| Model               | Trained Tokens | Avg. of Avg. | Language Avg. | Coding Avg. | ARC   | HellaSwag | MMLU (5-shot) | TruthfulQA | HumanEval (pass@1) | MBPP (pass@1) |
+|:-------------------:|:--------------:|:------------:|:-------------:|:-----------:|:-----:|:---------:|:-------------:|:----------:|:------------------:|:-------------:|
+| Mistral 7B          | -              | 48.68        | 62.40         | 33.95       | 59.98 | 83.31     | 64.16         | 42.15      | 29.12              | 38.78         |
+| **CrystalCoder 7B** | 1.4T           | 41.65        | 50.92         | 32.38       | 47.01 | 71.97     | 48.78         | 35.91      | 28.38              | 36.38         |
+| CodeLlaMA 7B        | 2.5T           | 39.94        | 42.42         | 37.45       | 39.93 | 60.80     | 31.12         | 37.82      | 33.50              | 41.40         |
+| OpenLLaMA v2 7B     | 1T             | 38.10        | 48.18         | 28.01       | 43.60 | 72.20     | 41.29         | 35.54      | 15.32              | 12.69         |
+| LLaMA 2 7B          | 2T             | 34.98        | 53.39         | 16.57       | 53.07 | 77.74     | 43.80         | 38.98      | 13.05              | 20.09         |
+| StarCoder-15B       | 1.03T          | -            | -             | 38.46       | -     | -         | -             | -          | 33.63              | 43.28         |
 
 ## About LLM360
 LLM360 is an initiative for comprehensive and fully open-sourced LLMs,
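As a reading aid for the new table: the derived columns appear to be simple means, with Language Avg. averaging ARC, HellaSwag, MMLU, and TruthfulQA, Coding Avg. averaging HumanEval and MBPP, and Avg. of Avg. averaging those two numbers. This is an inference from the CrystalCoder 7B row rather than a formula stated in the README, and not every row reproduces it exactly (e.g., Mistral 7B's Avg. of Avg.), so the table values remain authoritative. A minimal Python sketch under that assumption:

```python
from statistics import mean

# CrystalCoder 7B scores copied from the table above.
language = {"ARC": 47.01, "HellaSwag": 71.97, "MMLU (5-shot)": 48.78, "TruthfulQA": 35.91}
coding = {"HumanEval (pass@1)": 28.38, "MBPP (pass@1)": 36.38}

language_avg = mean(language.values())         # mean of the four language benchmarks
coding_avg = mean(coding.values())             # mean of the two coding benchmarks
avg_of_avg = mean([language_avg, coding_avg])  # mean of the two averages

print(f"Language Avg.: {language_avg:.2f}")  # 50.92
print(f"Coding Avg.:   {coding_avg:.2f}")    # 32.38
print(f"Avg. of Avg.:  {avg_of_avg:.2f}")    # 41.65
```

Running this prints 50.92, 32.38, and 41.65, matching the CrystalCoder 7B row; the CodeLlaMA 7B and LLaMA 2 7B rows check out the same way up to rounding.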