## Dataset Description

Industry models play a crucial role in driving enterprises' intelligent transformation and innovation, and high-quality industry data is the key to improving large-model performance and enabling industry applications to land. However, the datasets currently available for industry-model training commonly suffer from small volume, low quality, and insufficient domain expertise.

In June we released the [IndustryCorpus](https://huggingface.co/datasets/BAAI/IndustryCorpus) dataset. Building on it, we have carried out a further round of upgrades:

- Data sources: on top of the original data, we introduced more high-quality sources, such as the math and code data in Pile, BigCode, and open-web-math.
- Updated industry taxonomy: to better match how industries are actually divided, we redesigned the categories by combining the National Bureau of Statistics' classification of national economic industries (20 top-level categories) with a world-knowledge taxonomy, settling on 31 industry categories that essentially cover today's mainstream industries.
- Semantic quality filtering: we brought the high-quality data production scheme of IndustryCorpus down to the open data of IndustryCorpus2.0, applying rule-based filtering plus model-based filtering, which greatly improved overall data quality.
- Data quality tiering: to further organize data of different quality levels, we stratified the data by quality-assessment score into three tiers: high, middle, and low.
- Data size: 1 TB of Chinese and 2.2 TB of English data.

The data processing pipeline is the same as for IndustryCorpus:

![Data processing pipeline](./img/data_pipelien.jpg)

## Data Overview

### Industry data distribution

The on-disk size of each industry after the full processing pipeline is shown below:

|          Industry          | Size (GB) |            Industry            | Size (GB) |
| :------------------------: | :-------: | :----------------------------: | :-------: |
|        Programming         |   11.0    |              News              |   51.0    |
|     Biopharmaceuticals     |   61.7    |         Petrochemicals         |   40.2    |
|     Health & Medicine      |   271.7   |           Aerospace            |   38.6    |
|     Travel & Geography     |   64.0    |             Mining             |    8.9    |
|       Law & Justice        |   238.5   |      Finance & Economics       |   145.8   |
|     Math & Statistics      |   156.7   |     Literature & Emotions      |   105.5   |
|    Information Services    |    1.8    |         Transportation         |   40.5    |
|     Safety Management      |    4.3    |     Technology & Research      |   101.6   |
|        Automobiles         |   39.3    |    Water Resources & Marine    |   20.2    |
|   Hospitality & Catering   |   29.6    | Computing & Telecommunications |   157.8   |
|    Film & Entertainment    |   209.4   |       Subject Education        |   340.9   |
| Real Estate & Construction |   105.2   |    Artificial Intelligence     |    7.7    |
|       Power & Energy       |   68.7    |   Politics & Administration    |   271.5   |
|  Agriculture & Fisheries   |   111.9   |             Sports             |   262.5   |
|           Gaming           |   37.6    |         Manufacturing          |   47.2    |
|           Others           |   188.6   |                                |           |
|         Total (GB)         |   3276    |                                |           |

The industry distribution of the aggregated dataset:

![Industry data distribution](./img/data_ratio.png)

As the distribution chart shows, subject education, sports, politics, law, health & medicine, and film & entertainment together account for the majority of the data. These domains are abundant on the web and in textbooks, so their high share of the overall corpus is expected. Notably, because we deliberately supplemented math data, the share of math is also high; this differs from the share math holds in typical web corpora.

### Data quality tiers

We filtered the whole dataset by quality, removing extremely low-quality data, and split the usable data into three independent tiers (low, middle, high) so that data of different quality can be mixed in configurable ratios during training. The quality distributions are shown below. Chinese and English follow essentially the same trend: the middle tier is the largest, followed by high, with low the smallest. In addition, English has a higher share of high-tier data than Chinese (a steeper slope), which is consistent with the current distribution across the two languages.

![Quality-tier distribution](./img/quality_ratio.png)

### Industry taxonomy

To improve how well the dataset's industry divisions cover real industries, and to align with the industry catalog defined in the national standard, we referenced the National Bureau of Statistics' classification of national economic industries together with a world-knowledge taxonomy, merging and consolidating categories into a final set of 31 categories covering both Chinese and English. The category names are listed below:

```
{
    "数学_统计": {"zh": "数学与统计", "en": "Math & Statistics"},
    "体育": {"zh": "体育", "en": "Sports"},
    "农林牧渔": {"zh": "农业与渔业", "en": "Agriculture & Fisheries"},
    "房地产_建筑": {"zh": "房地产与建筑", "en": "Real Estate & Construction"},
    "时政_政务_行政": {"zh": "政治与行政", "en": "Politics & Administration"},
    "消防安全_食品安全": {"zh": "安全管理", "en": "Safety Management"},
    "石油化工": {"zh": "石油化工", "en": "Petrochemicals"},
    "计算机_通信": {"zh": "计算机与通信", "en": "Computing & Telecommunications"},
    "交通运输": {"zh": "交通运输", "en": "Transportation"},
    "其他": {"zh": "其他", "en": "Others"},
    "医学_健康_心理_中医": {"zh": "健康与医学", "en": "Health & Medicine"},
    "文学_情感": {"zh": "文学与情感", "en": "Literature & Emotions"},
    "水利_海洋": {"zh": "水利与海洋", "en": "Water Resources & Marine"},
    "游戏": {"zh": "游戏", "en": "Gaming"},
    "科技_科学研究": {"zh": "科技与研究", "en": "Technology & Research"},
    "采矿": {"zh": "采矿", "en": "Mining"},
    "人工智能_机器学习": {"zh": "人工智能", "en": "Artificial Intelligence"},
    "其他信息服务_信息安全": {"zh": "信息服务", "en": "Information Services"},
    "学科教育_教育": {"zh": "学科教育", "en": "Subject Education"},
    "新闻传媒": {"zh": "新闻传媒", "en": "Media & Journalism"},
    "汽车": {"zh": "汽车", "en": "Automobiles"},
    "生物医药": {"zh": "生物医药", "en": "Biopharmaceuticals"},
    "航空航天": {"zh": "航空航天", "en": "Aerospace"},
    "金融_经济": {"zh": "金融与经济", "en": "Finance & Economics"},
    "住宿_餐饮_酒店": {"zh": "住宿与餐饮", "en": "Hospitality & Catering"},
    "其他制造": {"zh": "制造业", "en": "Manufacturing"},
    "影视_娱乐": {"zh": "影视与娱乐", "en": "Film & Entertainment"},
    "旅游_地理": {"zh": "旅游与地理", "en": "Travel & Geography"},
    "法律_司法": {"zh": "法律与司法", "en": "Law & Justice"},
    "电力能源": {"zh": "电力与能源", "en": "Power & Energy"},
    "计算机编程_代码": {"zh": "编程", "en": "Programming"}
}
```

- Training-data construction for the industry classification model
  - Data construction
    - Data sources: samples drawn from the pretraining corpus plus open-source text-classification data, with the pretraining corpus accounting for 90%; sampling keeps Chinese and English at a 1:1 ratio
    - Label construction: an LLM classifies each sample multiple times, and only samples with consistent labels across rounds are kept as training data
    - Data size: 36K samples

    The overall construction flow:

    ![Classification-data construction flow](./img/classify.png)

  - Model training
    - Parameter updates: a classification head is added on top of a pretrained BERT-style model, which is then trained for text classification
    - Model selection: weighing model performance against inference efficiency, we chose a 0.5B-scale model; comparison experiments led us to select bge-m3 with full-parameter training as our base model
    - Hyperparameters: full-parameter training, max_length = 2048, lr = 1e-5, batch_size = 64; validation accuracy: 86%

    ![Classification-model experiments](./img/classify_exp.png)

### Data Quality Assessment

- Why filter low-quality data
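The cheapest part of the rule-plus-model filtering described above can be illustrated with a toy rule filter. This is a sketch only: the function name, the checks, and every threshold here are invented for illustration and are not the rules actually used in the pipeline.

```python
import re

def passes_rule_filter(text: str) -> bool:
    """Toy rule-based quality filter; thresholds are illustrative only."""
    if len(text) < 50:
        # too short to carry meaningful content
        return False
    # fraction of characters belonging to natural-language scripts
    # (Latin letters or CJK ideographs)
    letters = len(re.findall(r"[A-Za-z\u4e00-\u9fff]", text))
    if letters / len(text) < 0.5:
        # mostly symbols, numbers, or markup (e.g. hex dumps, base64)
        return False
    # heavy repetition of one token, as in 0x00,0x00,... image dumps
    tokens = text.split(",")
    if len(tokens) > 20 and len(set(tokens)) / len(tokens) < 0.2:
        return False
    return True

hex_dump = "0x00," * 200  # resembles the binary image dump shown below
prose = ("High-quality industry data is key to improving large-model "
         "performance and enabling real-world applications.")
print(passes_rule_filter(hex_dump), passes_rule_filter(prose))
```

Samples like the binary image dump below fail the character-ratio check immediately, which is why cheap rule filtering is run as a first pass before the more expensive model-based scoring.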
Below is a low-quality sample drawn from the data; it is clear that content like this is harmful rather than helpful to model training:

```
{"text": "\\_\\__\n\nTranslated from *Chinese Journal of Biochemistry and Molecular Biology*, 2007, 23(2): 154--159 \\[译自:中国生物化学与分子生物学报\\]\n"}
{"text": "#ifndef _IMGBMP_H_\n#define _IMGBMP_H_\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\nconst uint8_t bmp[]={\n\\/\\/-- 调入了一幅图像:D:\\我的文档\\My Pictures\\12864-555.bmp --*\\/\n\\/\\/-- 宽度x高度=128x64 --\n0x00,0x06,0x0A,0xFE,0x0A,0xC6,0x00,0xE0,0x00,0xF0,0x00,0xF8,0x00,0x00,0x00,0x00,\n0x00,0x00,0xFE,0x7D,0xBB,0xC7,0xEF,0xEF,0xEF,0xEF,0xEF,0xEF,0xEF,0xC7,0xBB,0x7D,\n0xFE,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x08,\n0x0C,0xFE,0xFE,0x0C,0x08,0x20,0x60,0xFE,0xFE,0x60,0x20,0x00,0x00,0x00,0x78,0x48,\n0xFE,0x82,0xBA,0xBA,0x82,0xBA,0xBA,0x82,0xBA,0xBA,0x82,0xBA,0xBA,0x82,0xFE,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x01,0x01,0x01,0x01,0x01,0x01,0x01,0x01,0x01,0x01,0x01,0x01,0x01,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xFE,0xFF,\n0x03,0x03,0x03,0x03,0x03,0x03,0x03,0x03,0x03,0xFF,0xFF,0x00,0x00,0xFE,0xFF,0x03,\n0x03,0x03,0x03,0x03,0x03,0x03,0x03,0x03,0x03,0xFF,0xFE
,0x00,0x00,0x00,0x00,0xC0,0xC0,\n0xC0,0x00,0x00,0x00,0x00,0xFE,0xFF,0x03,0x03,0x03,0x03,0x03,0x03,0x03,0x03,0x03,\n0xFF,0xFE,0x00,0x00,0xFE,0xFF,0x03,0x03,0x03,0x03,0x03,0x03,0x03,0x03,0x03,0xFF,\n0xFE,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xFF,0xFF,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xFF,0xFF,0x00,0x00,0xFF,0xFF,0x0C,\n0x0C,0x0C,0x0C,0x0C,0x0C,0x0C,0x0C,0x0C,0xFF,0xFF,0x00,0x00,0x00,0x00,0xE1,0xE1,\n0xE1,0x00,0x00,0x00,0x00,0xFF,0xFF,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0xFF,0xFF,0x00,0x00,0xFF,0xFF,0x0C,0x0C,0x0C,0x0C,0x0C,0x0C,0x0C,0x0C,0x0C,0xFF,\n0xFF,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x0F,0x1F,\n0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x1F,0x0F,0x00,0x00,0x0F,0x1F,0x18,\n0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x1F,0x0F,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x0F,0x1F,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,\n0x1F,0x0F,0x00,0x00,0x0F,0x1F,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x18,0x1F,\n0x0F,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0xE2,0x92,0x8A,0x86,0x00,0x00,0x7C,0x82,0x82,0x82,0x7C,\n0x00,0xFE,0x00,0x82,0x92,0xAA,0xC6,0x00,0x00,0xC0,0xC0,0x00,0x7C,0x82,0x82,0x82,\
n0x7C,0x00,0x00,0x02,0x02,0x02,0xFE,0x00,0x00,0xC0,0xC0,0x00,0x7C,0x82,0x82,0x82,\n0x7C,0x00,0x00,0xFE,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x24,0xA4,0x2E,0x24,0xE4,0x24,0x2E,0xA4,0x24,0x00,0x00,0x00,0xF8,0x4A,0x4C,\n0x48,0xF8,0x48,0x4C,0x4A,0xF8,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xC0,0x20,0x10,0x10,\n0x10,0x10,0x20,0xC0,0x00,0x00,0xC0,0x20,0x10,0x10,0x10,0x10,0x20,0xC0,0x00,0x00,\n0x00,0x12,0x0A,0x07,0x02,0x7F,0x02,0x07,0x0A,0x12,0x00,0x00,0x00,0x0B,0x0A,0x0A,\n0x0A,0x7F,0x0A,0x0A,0x0A,0x0B,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,\n0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x1F,0x20,0x40,0x40,\n0x40,0x50,0x20,0x5F,0x80,0x00,0x1F,0x20,0x40,0x40,0x40,0x50,0x20,0x5F,0x80,0x00,\n}; \n\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif \\/\\/ _IMGBMP_H_ _SSD1306_16BIT_H_\n"}
```

- Data construction
  - Data sources: randomly sampled from the pretraining corpus
  - Label construction: we designed detailed scoring guidelines and had an LLM score each sample over multiple rounds, keeping only samples whose scores differ by less than 2 across rounds
  - Data size: 20K scored samples, with a 1:1 Chinese/English ratio

  The scoring prompt:

```
quality_prompt = """Below is an extract from a web page.
Evaluate whether the page has a high natural language value and could be useful in a natural language task to train a good language model using the additive 5-point scoring system described below. Points are accumulated based on the satisfaction of each criterion:

- Zero score if the content contains only some meaningless content or private content, such as some random code, http url or copyright information, personally identifiable information, binary encoding of images.
- Add 1 point if the extract provides some basic information, even if it includes some useless contents like advertisements and promotional material.
- Add another point if the extract is written in good style, semantically fluent, and free of repetitive content and grammatical errors.
- Award a third point if the extract has relatively complete semantic content, and is written in a good and fluent style, the entire content expresses something related to the same topic, rather than a patchwork of several unrelated items.
- A fourth point is awarded if the extract has obvious educational or literary value, or provides a meaningful point or content, contributes to the learning of the topic, and is written in a clear and consistent style. It may be similar to a chapter in a textbook or tutorial, providing a lot of educational content, including exercises and solutions, with little to no superfluous information. The content is coherent and focused, which is valuable for structured learning.
- A fifth point is awarded if the extract has outstanding educational value or is of very high information density, provides very high value and meaningful content, does not contain useless information, and is well suited for teaching or knowledge transfer. It contains detailed reasoning, has an easy-to-follow writing style, and can provide deep and thorough insights.

The extract: <{EXAMPLE}>.

After examining the extract:

- Briefly justify your total score, up to 50 words.
- Conclude with the score using the format: "Quality score: " ... """
```

- Model training
  - Model selection: as with the classification model, we used a 0.5B-scale model, running comparison experiments between bge-m3 and qwen-0.5b; bge-m3 showed the best overall performance
  - Hyperparameters: base model bge-m3, full-parameter training, lr = 1e-5, batch_size = 64, max_length = 2048
  - Evaluation: on the validation set, the model's quality judgments agree with GPT-4's for 90% of samples

![Quality-model experiments](./img/quality-exp.png)

- Training gains from high-quality data

To verify whether high-quality data enables more efficient training, we extracted the high-quality subset from the same 50B tokens of unfiltered data (so the two datasets can be assumed to share roughly the same distribution) and ran autoregressive training on the same base model. As the curves show, the model trained on high-quality data reaches with 14B tokens the performance that ordinary data reaches at 50B tokens: high-quality data greatly improves training efficiency.

![Training-efficiency comparison](./img/quality_train.png)

In addition, high-quality data can be added during the annealing stage of pretraining to further lift model performance. To verify this conjecture, when training our industry models we added the filtered high-quality data, together with pretraining-format data converted from part of our instruction data, at the annealing stage; as shown below, this markedly improved model performance.

![cpt_two_stage](./img/cpt_two_stage.png)

Finally, high-quality pretraining corpora contain rich, high-value knowledge content from which instruction data can be extracted, further improving the richness and knowledge coverage of instruction data. This motivated the [Industry-Instruction](https://huggingface.co/datasets/BAAI/Industry-Instruction) project, where we describe the process in detail.
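To show how the quality scores feed the high/middle/low tiering described earlier, here is a minimal sketch. It assumes each record carries a 0-5 score from the quality model in a `quality_score` field; both the field name and the tier cut-offs are hypothetical, not the ones used for the released data.

```python
def assign_tier(score: float) -> str:
    """Map a 0-5 quality score to a tier; cut-offs are illustrative."""
    if score >= 4.0:
        return "high"
    if score >= 2.0:
        return "middle"
    return "low"

def split_by_tier(records):
    """Group scored records into the three released quality tiers."""
    tiers = {"high": [], "middle": [], "low": []}
    for rec in records:
        tiers[assign_tier(rec["quality_score"])].append(rec)
    return tiers

# toy records standing in for scored corpus samples
samples = [
    {"text": "a textbook-style explanation ...", "quality_score": 4.5},
    {"text": "a readable news article ...", "quality_score": 3.0},
    {"text": "boilerplate and ads ...", "quality_score": 1.0},
]
tiers = split_by_tier(samples)
print({k: len(v) for k, v in tiers.items()})
```

Keeping the tiers as separate files, as in the released dataset, lets downstream users choose their own mixing ratios, e.g. reserving the high tier for the annealing stage described above.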