stgzr committed on
Commit f0c363f
1 Parent(s): a78918e

update README.md

Files changed (1)
  1. README.md +1 -4
README.md CHANGED
@@ -71,7 +71,7 @@ We evaluate our model on several academic benchmarks then compare with other sim
  | HellaSwag(0-shot) | 82.03 | 81.57 | 83.32 |
 
 
- **Note:** To facilitate reproduction, the results of common benchmarks are generated by [OpenCompass](https://github.com/open-compass/opencompass) except humaneval and mbpp as we experience code timeout and postprocess issues. Besides, Usmle and CFA is evaluated using internal evaluation scripts.
+ **Note:** To facilitate reproduction, the results of common benchmarks are generated by [OpenCompass](https://github.com/open-compass/opencompass) except humaneval and mbpp as we experience code timeout and postprocess issues.
 
  ### Chat Model
 
@@ -85,9 +85,6 @@ We present the performance results of our chat model and other LLM on various st
  | Arena-Hard | 24.2 | 42.6 | 43.1 |
  | GSM8K | 81.42 | 79.45 | 84.04 |
  | MATH | 42.28 | 54.06 | 51.48 |
- | USMLE | 58.70 | 55.84 | 79.70 |
- | CFA 2.0 | 35.5 | 42.5 | 62.75 |
-
 
  ### Long Context
 
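For anyone reproducing the retained benchmark numbers, an OpenCompass run is driven by a small Python config that lists the datasets and models to evaluate. The sketch below follows OpenCompass's documented config style; the dataset config modules and the model path are illustrative assumptions, not taken from this commit or its README.

```python
# eval_sketch.py -- a minimal OpenCompass config sketch (assumptions noted inline).
# Launched from an OpenCompass checkout, typically via:  python run.py eval_sketch.py
from mmengine.config import read_base
from opencompass.models import HuggingFaceCausalLM

with read_base():
    # Assumed dataset config modules; pick the ones matching the README's
    # benchmarks (e.g. HellaSwag, GSM8K) from OpenCompass's configs/datasets tree.
    from .datasets.hellaswag.hellaswag_ppl import hellaswag_datasets
    from .datasets.gsm8k.gsm8k_gen import gsm8k_datasets

datasets = [*hellaswag_datasets, *gsm8k_datasets]

models = [
    dict(
        type=HuggingFaceCausalLM,
        # Placeholder paths -- substitute the checkpoint under evaluation.
        path='your-org/your-model',
        tokenizer_path='your-org/your-model',
        max_seq_len=2048,
        max_out_len=100,
        batch_size=8,
        run_cfg=dict(num_gpus=1),
    )
]
```

Per the note above, humaneval and mbpp would still need separate handling, since the README reports timeout and post-processing issues for those two under OpenCompass.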