wenbopan committed on
Commit
5794441
1 Parent(s): 1cff19b
Files changed (1)
  1. README.md +26 -27
README.md CHANGED
@@ -12,44 +12,44 @@ language:

  # Fi-9B

- Fi-9B is an improved [Yi-9B-200K](https://huggingface.co/01-ai/Yi-9B-200K) with extensive instruction tuning on [Fusang-V1](https://huggingface.co/datasets/wenbopan/Fusang-v1).
-
- Compared to Yi-9B-200K, Fi-9B gains greater capability in various downstream tasks and long-context modeling thanks to the large-scale synthetic data in Fusang-V1.

  ## Performance

- ### Fact-based Evaluation
-
- Fi is competitive among all models in the ~9B size range:

- | **Metric** | **winogrande** | **hellaswag** | **truthfulqa** | **ai2_arc** |
- | ---------- | -------------- | ------------- | -------------- | ----------- |
- | Yi-9B-200K | 0.7167 | 0.5672 | 0.3380 | 0.6925 |
- | Fi-9B-200K | 0.7111 | **0.5728** | **0.4086** | **0.7258** |

- ### Long-context Modeling

- Fi makes even further progress than Yi-9B-200K, which is already impressive in terms of handling long-range tasks:

- Results on LongBench:

- | **Name** | **Average_zh** | **Average_en** | **Code Completion** |
- |-----------------|----------------|----------------|---------------------|
- | Yi-9B-200K | 30.288 | 36.7071 | 72.2 |
- | Fi-9B-200K | **41.092** | **40.9536** | 46.0 |

- Detailed score decomposition on LongBench:

- | **Name** | **Few-shot Learning_en** | **Synthetic Tasks_en** | **Single-Doc QA_en** | **Multi-Doc QA_en** | **Summarization_en** | **Few-shot Learning_zh** | **Synthetic Tasks_zh** | **Single-Doc QA_zh** | **Multi-Doc QA_zh** | **Summarization_zh** |
- |-----------------|--------------------------|------------------------|----------------------|---------------------|----------------------|--------------------------|------------------------|----------------------|---------------------|----------------------|
- | Yi-9B-200K | 60.6 | 22.8 | 30.9 | 38.9 | 25.8 | 46.5 | 28.0 | 49.6 | 17.7 | 9.7 |
- | Fi-9B-200K | **63.8** | **40.2** | **36.2** | 38.0 | **26.3** | 30.0 | **75.1** | **55.6** | **30.7** | **14.1** |

  <!--### Performance on Preference TODO-->

- ### Bilingual Ability

- Coming soon...

  ## How to use Fi

@@ -57,6 +57,5 @@ Coming soon...

  ## Current Limitations

- This version of Fi-9B may not be able to stop generation in some scenarios. I will fix that soon.
-
- Compared to the original Yi-9B-200K, Fi-9B has degraded ability for code completion. This may be due to the lack of raw code data during instruction tuning.
 

  # Fi-9B

+ Fi-9B is an improved [Yi-9B-200K](https://huggingface.co/01-ai/Yi-9B-200K) with extensive instruction tuning on [Fusang-V1](https://huggingface.co/datasets/wenbopan/Fusang-v1). Compared to Yi-9B-200K, Fi-9B has gained greater capability in various downstream tasks and long-context modeling thanks to the large-scale synthetic data in Fusang-V1.

  ## Performance

+ Fi-9B improves on Yi-9B-200K in most dimensions, especially long-range modeling and bilingual (English, Chinese) understanding. Fi is competitive among open-source models at around 9B parameters. It performs well on factual tasks and is preferred by LLM judges.

+ ### Fact-based Evaluation (Open LLM Leaderboard)

+ | **Metric** | **winogrande** | **hellaswag** | **truthfulqa** | **ai2_arc** |
+ |-----------------|----------------|---------------|----------------|-------------|
+ | **Yi-9B-200K** | 71.67 | 56.72 | 33.80 | 69.25 |
+ | **Fi-9B-200K** | 71.11 | **57.28** | **40.86** | **72.58** |

+ ### Long-context Modeling (LongBench)

+ | **Name** | **Average_zh** | **Average_en** | **Code Completion** |
+ |----------------|----------------|----------------|---------------------|
+ | **Yi-9B-200K** | 30.288 | 36.7071 | 72.2 |
+ | **Fi-9B-200K** | **41.092** | **40.9536** | 46.0 |

+ <details>
+ <summary>Score breakdown</summary>

+ | **Name** | **Few-shot Learning_en** | **Synthetic Tasks_en** | **Single-Doc QA_en** | **Multi-Doc QA_en** | **Summarization_en** | **Few-shot Learning_zh** | **Synthetic Tasks_zh** | **Single-Doc QA_zh** | **Multi-Doc QA_zh** | **Summarization_zh** |
+ |----------------|--------------------------|------------------------|----------------------|---------------------|----------------------|--------------------------|------------------------|----------------------|---------------------|----------------------|
+ | **Yi-9B-200K** | 60.6 | 22.8 | 30.9 | 38.9 | 25.8 | 46.5 | 28.0 | 49.6 | 17.7 | 9.7 |
+ | **Fi-9B-200K** | **63.8** | **40.2** | **36.2** | 38.0 | **26.3** | 30.0 | **75.1** | **55.6** | **30.7** | **14.1** |

+ </details>

  <!--### Performance on Preference TODO-->

+ ### Bilingual Ability (CMMLU & MMLU)

+ | **Name** | **CMMLU** |
+ |----------------|-----------|
+ | **Yi-9B-200K** | 71.97 |
+ | **Fi-9B-200K** | 73.28 |

  ## How to use Fi
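
The usage instructions themselves are unchanged in this commit, so they do not appear in the diff. As a minimal sketch only, assuming Fi-9B exposes the standard Hugging Face Transformers causal-LM interface, loading and generating could look like the following (the repo id `wenbopan/Fi-9B` is a placeholder, not taken from the README; substitute the actual model path):

```python
# Minimal sketch, not from the README: standard Transformers causal-LM usage.
# "wenbopan/Fi-9B" is a hypothetical repo id; substitute the real model path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wenbopan/Fi-9B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

prompt = "Summarize the following report in three sentences:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```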
 
  ## Current Limitations
 
+ - This version of Fi-9B may not be able to stop generation in some scenarios; I will fix that soon (an illustrative stopping-criteria workaround is sketched after this list).
+ - Compared to the original Yi-9B-200K, Fi-9B has degraded ability for code completion. This may be due to the lack of raw code data during instruction tuning.
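
As a possible stopgap for the stopping issue noted above (purely illustrative, not part of the model card): when generating with Hugging Face Transformers, the output length can be capped and a custom stopping criterion supplied so runaway generations are cut off. The `model`, `tokenizer`, and `inputs` objects are assumed from the sketch in the usage section; the stop marker below is arbitrary.

```python
# Illustrative workaround for generations that fail to stop on their own:
# cap max_new_tokens and stop once a chosen marker appears in the newly
# generated text. Assumes `model`, `tokenizer`, and `inputs` from the
# usage sketch above; the "\n\n## " marker is only an example.
import torch
from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnText(StoppingCriteria):
    """Return True once `stop_text` shows up in the generated continuation."""
    def __init__(self, tokenizer, prompt_len: int, stop_text: str):
        self.tokenizer = tokenizer
        self.prompt_len = prompt_len
        self.stop_text = stop_text

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        continuation = self.tokenizer.decode(input_ids[0, self.prompt_len:], skip_special_tokens=True)
        return self.stop_text in continuation

prompt_len = inputs["input_ids"].shape[-1]
stopping = StoppingCriteriaList([StopOnText(tokenizer, prompt_len, "\n\n## ")])
outputs = model.generate(**inputs, max_new_tokens=512, stopping_criteria=stopping)
```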