---
license: apache-2.0
datasets:
- BelleGroup/train_2M_CN
- BelleGroup/train_3.5M_CN
- BelleGroup/train_1M_CN
- BelleGroup/train_0.5M_CN
- BelleGroup/school_math_0.25M
language:
- zh
---

## GoGPT

BLOOM fine-tuned on Chinese instruction data.
![img.png](resources/img.png)
> One training epoch is sufficient; the second and third epochs bring little further improvement.

- 🚀 Diverse instruction data
- 🚀 Filtered, high-quality Chinese data

| Model name | Parameters | Model link |
|------------|------------|------------|
| gogpt-560m | 560M parameters | 🤗[golaxy/gogpt-560m](https://huggingface.co/golaxy/gogpt-560m) |
| gogpt-3b   | 3B parameters   | 🤗[golaxy/gogpt-3b-bloom](https://huggingface.co/golaxy/gogpt-3b-bloom) |
| gogpt-7b   | 7B parameters   | 🤗[golaxy/gogpt-7b-bloom](https://huggingface.co/golaxy/gogpt-7b-bloom) |
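The checkpoints above can be loaded with 🤗 `transformers`. A minimal inference sketch follows; note that the Alpaca-style prompt template in `build_prompt` is an assumption based on how Stanford Alpaca-derived instruction data is usually formatted, so adjust it if the checkpoint expects a different template.

```python
def build_prompt(instruction: str) -> str:
    """Wrap an instruction in an Alpaca-style template (assumed prompt format)."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )


def generate(instruction: str, model_id: str = "golaxy/gogpt-560m") -> str:
    """Generate a response with one of the GoGPT checkpoints listed above."""
    from transformers import AutoModelForCausalLM, AutoTokenizer  # heavy import

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    inputs = tokenizer(build_prompt(instruction), return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=256)

    # Decode, then strip the echoed prompt so only the response remains.
    text = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return text.split("### Response:\n", 1)[-1]
```

Swap `model_id` for `golaxy/gogpt-3b-bloom` or `golaxy/gogpt-7b-bloom` to use the larger checkpoints; the 7B model needs roughly 28 GB of memory in fp32, so consider `torch_dtype` or quantization on smaller GPUs.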


## Example Outputs
![img.png](resources/test1.png)
![img.png](resources/test2.png)
![img.png](resources/test3.png)
![img.png](resources/test4.png)
![img.png](resources/test5.png)
![img.png](resources/test6.png)


## TODO
- RLHF training
- Add a Chinese-English parallel corpus

## Acknowledgements

- [@hz - zero_nlp](https://github.com/yuanzhoulvpi2017/zero_nlp)
- [stanford_alpaca](https://github.com/tatsu-lab/stanford_alpaca)
- [Belle datasets](https://huggingface.co/BelleGroup)
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_golaxy__gogpt-7b-bloom).

| Metric              | Value |
|---------------------|-------|
| Avg.                | 38.61 |
| ARC (25-shot)       | 44.62 |
| HellaSwag (10-shot) | 62.56 |
| MMLU (5-shot)       | 33.81 |
| TruthfulQA (0-shot) | 40.61 |
| Winogrande (5-shot) | 62.9  |
| GSM8K (5-shot)      | 0.0   |
| DROP (3-shot)       | 25.77 |