doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2310.20499 | 39 | Algorithm 1 SpyGPT: Interactive Multi-Agent Framework
Require: Keyword pair {i, j}, number of all agents N, and guest agent X
Ensure: Final winning team t
1: procedure SpyGPT({i, j}, N, X)
2:   Wspy ← RandomSelection(i, j)  ▷ Initialize spy team's keyword
3:   Wvillager ← Select w ∈ {i, j} ∧ w ≠ Wspy  ▷ Initialize villager team's keyword
4:   Hspy ← [Wspy, N]; Hvillager ← [Wvillager, N]  ▷ Initialize game history
5:   X ← [Hspy]; Y1, ···, Y_{N−1} ← [Hvillager]  ▷ Initialize agents
6:   P ← [X, Y1, ···, Y_{N−1}]; Nsurvive ← N  ▷ Record all agents
7:   while Nsurvive > 2 do
8:     for each Pi in P do  ▷ Speaking phase
9:       s ← Pi(H)  ▷ Generate descriptions
10:      H ← H + [s]  ▷ Append s to H | 2310.20499#39 | Leveraging Word Guessing Games to Assess the Intelligence of Large Language Models | The automatic evaluation of LLM-based agent intelligence is critical in
developing advanced LLM-based agents. Although considerable effort has been
devoted to developing human-annotated evaluation datasets, such as AlpacaEval,
existing techniques are costly, time-consuming, and lack adaptability. In this
paper, inspired by the popular language game ``Who is Spy'', we propose to use
the word guessing game to assess the intelligence performance of LLMs. Given a
word, the LLM is asked to describe the word and determine its identity (spy or
not) based on its and other players' descriptions. Ideally, an advanced agent
should possess the ability to accurately describe a given word using an
aggressive description while concurrently maximizing confusion in the
conservative description, enhancing its participation in the game. To this end,
we first develop DEEP to evaluate LLMs' expression and disguising abilities.
DEEP requires LLM to describe a word in aggressive and conservative modes. We
then introduce SpyGame, an interactive multi-agent framework designed to assess
LLMs' intelligence through participation in a competitive language-based board
game. Incorporating multi-agent interaction, SpyGame requires the target LLM to
possess linguistic skills and strategic thinking, providing a more
comprehensive evaluation of LLMs' human-like cognitive abilities and
adaptability in complex communication situations. The proposed evaluation
framework is very easy to implement. We collected words from multiple sources,
domains, and languages and used the proposed evaluation framework to conduct
experiments. Extensive experiments demonstrate that the proposed DEEP and
SpyGame effectively evaluate the capabilities of various LLMs, capturing their
ability to adapt to novel situations and engage in strategic communication. | http://arxiv.org/pdf/2310.20499 | Tian Liang, Zhiwei He, Jen-tse Huang, Wenxuan Wang, Wenxiang Jiao, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi, Xing Wang | cs.CL | Work in progress | null | cs.CL | 20231031 | 20231106 | [] |
2310.20499 | 40 | Algorithm 1 (continued)
11:   V ← []  ▷ Initialize number of votes
12:   for each Pi in P do  ▷ Voting phase
13:     v ← Pi(H)  ▷ Generate voted agent
14:     H ← H + [v]  ▷ Append v to H
15:     V ← V + [v]  ▷ Append v to V
16:   Pvoted ← Max p ∈ V  ▷ Select the voted agent
17:   if Pvoted = X then  ▷ Spy agent out, game over
18:     break
19:   else
20:     P ← P − Pvoted; Nsurvive ← Nsurvive − 1  ▷ Villager agent out, game continues
21:   return t | 2310.20499#40 | Leveraging Word Guessing Games to Assess the Intelligence of Large Language Models | The automatic evaluation of LLM-based agent intelligence is critical in
developing advanced LLM-based agents. Although considerable effort has been
devoted to developing human-annotated evaluation datasets, such as AlpacaEval,
existing techniques are costly, time-consuming, and lack adaptability. In this
paper, inspired by the popular language game ``Who is Spy'', we propose to use
the word guessing game to assess the intelligence performance of LLMs. Given a
word, the LLM is asked to describe the word and determine its identity (spy or
not) based on its and other players' descriptions. Ideally, an advanced agent
should possess the ability to accurately describe a given word using an
aggressive description while concurrently maximizing confusion in the
conservative description, enhancing its participation in the game. To this end,
we first develop DEEP to evaluate LLMs' expression and disguising abilities.
DEEP requires LLM to describe a word in aggressive and conservative modes. We
then introduce SpyGame, an interactive multi-agent framework designed to assess
LLMs' intelligence through participation in a competitive language-based board
game. Incorporating multi-agent interaction, SpyGame requires the target LLM to
possess linguistic skills and strategic thinking, providing a more
comprehensive evaluation of LLMs' human-like cognitive abilities and
adaptability in complex communication situations. The proposed evaluation
framework is very easy to implement. We collected words from multiple sources,
domains, and languages and used the proposed evaluation framework to conduct
experiments. Extensive experiments demonstrate that the proposed DEEP and
SpyGame effectively evaluate the capabilities of various LLMs, capturing their
ability to adapt to novel situations and engage in strategic communication. | http://arxiv.org/pdf/2310.20499 | Tian Liang, Zhiwei He, Jen-tse Huang, Wenxuan Wang, Wenxiang Jiao, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi, Xing Wang | cs.CL | Work in progress | null | cs.CL | 20231031 | 20231106 | [] |
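To make the two chunks of Algorithm 1 above easier to follow, here is a minimal, runnable Python sketch of the same speak-then-vote loop. The Agent class, its speak/vote interface, and the tie-breaking via Counter are illustrative assumptions; in the paper each of these calls is an LLM interaction.

```python
import random
from collections import Counter

class Agent:
    """Illustrative stand-in for an LLM player; the interface is assumed, not from the paper."""

    def __init__(self, name, keyword, n_agents):
        self.name = name
        self.keyword = keyword      # the agent's private keyword (Wspy or Wvillager)
        self.n_agents = n_agents    # N, as seeded into the agent's initial history

    def speak(self, history):
        # In SpyGame this would be an LLM call conditioned on the game history.
        return f"{self.name}: a deliberately vague description of '{self.keyword}'"

    def vote(self, history, candidates):
        # In SpyGame the LLM names the suspected spy; random choice keeps the sketch runnable.
        return random.choice([c for c in candidates if c is not self])


def spy_gpt(keyword_pair, n_agents):
    """Minimal rendering of the speak/vote loop in Algorithm 1."""
    w_spy = random.choice(keyword_pair)                        # spy team's keyword
    w_villager = next(w for w in keyword_pair if w != w_spy)   # villager team's keyword
    spy = Agent("X", w_spy, n_agents)
    players = [spy] + [Agent(f"Y{i}", w_villager, n_agents) for i in range(1, n_agents)]
    history = []
    while len(players) > 2:
        for p in players:                                      # speaking phase
            history.append(p.speak(history))
        votes = [p.vote(history, players) for p in players]    # voting phase
        history.extend(f"{p.name} votes {v.name}" for p, v in zip(players, votes))
        voted = Counter(votes).most_common(1)[0][0]            # most-voted agent
        if voted is spy:
            return "villagers"                                 # spy voted out: game over
        players.remove(voted)                                  # villager out: game continues
    return "spy"                                               # spy survives to the final two


print(spy_gpt(("apple", "pear"), n_agents=5))
```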
2310.19341 | 0 | arXiv:2310.19341v1 [cs.CL] 30 Oct 2023
# Skywork: A More Open Bilingual Foundation Model
Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou†
Skywork Team, Kunlun Inc.
# Abstract | 2310.19341#0 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 1 | Skywork Team, Kunlun Inc.
# Abstract
In this report, we present Skywork-13B, a family of large language models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both English and Chinese texts. This bilingual foundation model is the most extensively trained and openly published LLM of comparable size to date. We introduce a two-stage training methodology using a segmented corpus, targeting general purpose training and then domain-specific enhancement training, respectively. We show that our model not only excels on popular benchmarks, but also achieves state of the art performance in Chinese language modeling on diverse domains. Furthermore, we propose a novel leakage detection method, demonstrating that data contamination is a pressing issue warranting further investigation by the LLM community. To spur future research, we release Skywork-13B along with checkpoints obtained during intermediate stages of the training process. We are also releasing part of our SkyPile corpus, a collection of over 150 billion tokens of web text, which is the largest high quality open Chinese pre-training corpus to date. We hope Skywork-13B and our open corpus will serve as a valuable open-source resource to democratize access to high-quality LLMs. | 2310.19341#1 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 2 | generating, and translating human language with an unprecedented degree of accuracy and sophistication. However, the proliferation of these models has also been accompanied by a growing trend towards commercialization and a lack of transparency, a phenomenon that is increasingly influencing the dynamics of the open-source community.
Historically, the open-source community has thrived on the principles of collaboration, transparency, and unrestricted sharing of ideas. However, as the commercial potential of LLMs has been recognized, this openness has begun to diminish. The reality is that many organizations only make model checkpoints publicly accessible, while withholding vital information on model reproduction. This practice significantly hampers the progress of the field.
In an effort to revive the spirit of the open-source community and contribute to the ongoing dialogue about transparency in AI, we present Skywork-13B: a family of bilingual large language models with 13 billion parameters, trained on a colossal corpus of more than 3.2 trillion tokens drawn from both English and Chinese texts. To our knowledge, our Skywork-13B is the most thoroughly trained family of open LLMs of comparable size to date.
# Introduction | 2310.19341#2 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 3 | # Introduction
Natural Language Processing (NLP), a vital branch of artificial intelligence, has experienced a transformative surge in recent years. Pivotal to this revolution has been the advent and advancement of large language models (LLMs) (Ouyang et al., 2022; OpenAI, 2023; Bubeck et al., 2023; Chowdhery et al., 2022; Anil et al., 2023; Touvron et al., 2023a,b). These complex computational structures, composed of billions of parameters, are capable of understanding,
In this technical report, we offer a comprehensive disclosure of the Skywork-13B developmental journey. We detail the composition of our training data, provide insights into the evolutionary trajectory of the model's abilities during training, and share methodologies that could be employed to enhance model ability in specific domains. We believe that such an open approach not only aids in the reproducibility of our work but also provides a valuable resource for other researchers seeking to explore and expand the capabilities of large language models. This technical report is also a call to
† Email: {forename}.{surname}@kunlun-inc.com
| 2310.19341#3 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 4 | † Email: {forename}.{surname}@kunlun-inc.com
action for renewed transparency in the field of NLP. Through it, we hope to inspire a return to a more collaborative, open-source community, where progress is not hampered by commercial considerations but propelled by collective intelligence and shared wisdom.
Our contributions are the following:
• We release Skywork-13B¹, a family of LLMs that is the most extensively trained and openly published of comparable size to date. Our Skywork-13B family includes 1) Skywork-13B-Base, a strong foundation model with state of the art Chinese language modeling capability, and 2) Skywork-13B-Chat, a fine-tuned version optimized for conversation².
• We disclose detailed information on the training process and data composition. We also release intermediate checkpoints, which provide a valuable resource for understanding how the model's capabilities develop over the course of training. This enables other researchers to leverage these checkpoints for their specific use-cases.
• We release a portion of our high quality training corpus, totaling more than 150 billion tokens. To our knowledge, this is the largest open Chinese corpus for language model pre-training to date. | 2310.19341#4 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 5 | • We develop a novel method that detects the level of in-domain data usage during the training stage. To facilitate reproduction of the experiments presented in this report, we have released the relevant data.
# 2 Methodology
2.1 Two Pre-training Stages
In order to train Skywork-13B, we constructed SkyPile (see Section 3.1), a massive training corpus primarily constituted by publicly accessible web pages. We identified a small subset of SkyPile, encompassing exercises and solutions that span a broad spectrum of subjects from primary to graduate school. This includes
¹ Github repository: https://github.com/SkyworkAI/Skywork.
² In this technical report we focus on the development of the base model. Details on Skywork-13B-Chat can be found in our Github repository.
| 2310.19341#5 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 6 | ² In this technical report we focus on the development of the base model. Details on Skywork-13B-Chat can be found in our Github repository.
coding problems, national exam questions, textbook exercises, and others. Given the majority of these exercises are STEM-related, we henceforth refer to this subset and its complement as SkyPile-STEM and SkyPile-Main, respectively. Rather than training the Skywork-13B foundation model directly on SkyPile as a whole, we adopted a two-stage training approach. The first stage, which constitutes the primary pre-training phase, involves training the model from scratch on SkyPile-Main. In the second stage, our Skywork-13B is enriched with STEM-related domain knowledge and problem-solving skills through continual pre-training on SkyPile-STEM. To circumvent the potential issue of catastrophic forgetting, this continual pre-training is performed on a mix of SkyPile-STEM and SkyPile-Main, rather than exclusively on SkyPile-STEM. | 2310.19341#6 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
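The two-stage schedule described in the chunk above (Stage-1 from scratch on SkyPile-Main, then Stage-2 on a mix with SkyPile-STEM to avoid catastrophic forgetting) amounts to a weighted sampler over the two segments. A hedged Python sketch follows; the stem_fraction knob and the dataset interfaces are invented for illustration, since the report does not publish the actual mixing ratio.

```python
import random

def stage2_batches(main_docs, stem_docs, stem_fraction=0.3, batch_size=8, n_batches=100):
    """Yield Stage-2 batches that mix SkyPile-STEM with SkyPile-Main.

    stem_fraction is a hypothetical knob: the report states a mix is used to avoid
    catastrophic forgetting but does not disclose the ratio.
    """
    for _ in range(n_batches):
        yield [random.choice(stem_docs if random.random() < stem_fraction else main_docs)
               for _ in range(batch_size)]

# Toy corpora standing in for the two corpus segments:
main = [f"web page {i}" for i in range(1000)]
stem = [f"exam question {i}" for i in range(100)]
print(len(next(stage2_batches(main, stem))))  # 8
```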
2310.19341 | 7 | The decision to segregate Stage-1 and Stage- 2 pre-training serves a dual purpose. Firstly, we acknowledge that a significant proportion of the samples from SkyPile-STEM are, by their nature, supervised data. Those data are closely related to popular benchmarks such as CEVAL (Huang et al., 2023), MMLU (Hendrycks et al., 2021) and GSM8K (Cobbe et al., 2021), and can be utilized in a supervised fine-tuning (SFT) process to directly enhance model performance on related downstream tasks. In this context, the separation between Stage-1 and Stage-2 training enables us to more effectively assess the impacts of general-purpose pre-training (on web texts) and targeted pre-training (on in- domain/supervised data). Such insights could inform future data collection and compilation strategies for foundational model training.
Secondly, by restricting first stage pre- training to general-purpose data, we are able to produce a version of foundation model as an alternative to the one with targeted enhance- ment. While the latter demonstrates superior performance on certain downstream tasks, it is less capable in language modeling of natural texts. We posit that this alternative is a valu- able contribution to the community, given its potential to excel in applications that do not require STEM-related competencies. | 2310.19341#7 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 9 | The main issue with monitoring training loss is that its effectiveness comes into question when considering the potential of overfitting. The training loss is equivalent to validation loss only if the training data is utilized exactly once (i.e., in one epoch). Yet, in practical scenarios of training LLMs, high-quality data often go through the training process multiple times (Taylor et al., 2022; Touvron et al., 2023a; Rozière et al., 2023; Gunasekar et al., 2023; Li et al., 2023b). Besides, even after explicit de-duplication, there may still exist a significant amount of duplicated data in the training set (Soboleva et al., 2023; Abbas et al., 2023). In either case, solely relying on training loss can lead to overlooking the issue of overfitting, thereby producing overly optimistic estimates of model performance. The top left subplot in Figure 3 illustrates the trajectory of the pre-training loss for our Skywork-13B model. Consistent with findings reported in (Touvron et al., 2023a,b; Baichuan Inc., 2023), the loss demonstrates a steady decline throughout the training process. However, an observation | 2310.19341#9 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 11 | Benchmarking based on intermediate checkpoints is another common monitoring approach (Touvron et al., 2023a; Baichuan Inc., 2023). Nevertheless, it presents several challenges. Firstly, there is a high variance in benchmark results, which can lead to unstable and unreliable assessments of training progress. Secondly, benchmark results are not sensitive to minor progress in training. This insensitivity makes it difficult to accurately track gradual improvements during the training process. Besides, weaker models do not follow instructions well. Hence benchmark results may not accurately reflect their true learning progress or potential. Finally, an inconvenience posed by most benchmarks is the necessity for model generation. This process is notably resource-intensive, demanding substantial computational power.
During the pre-training of Skywork-13B, we
Figure 1: Validation loss on English web texts vs. average task metric during the pre-training of Skywork-13B. The tasks include BoolQ (Clark et al., 2019), PIQA (Bisk et al., 2019), Winogrande (Sakaguchi et al., 2021), TriviaQA (Joshi et al., 2017) and RACE (Lai et al., 2017). | 2310.19341#11 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 12 | embrace the method of monitoring the language modeling loss across numerous reserved validation sets, each reflecting a distinct data distribution. More specifically, we have created separate validation sets for code, academic publications, social media posts, web texts in Chinese and English, among others. Conventional monitoring metrics are also utilized, but they serve merely as supplementary tools. In Figure 1 we plot the curve of language model validation loss on English web texts against the average metric of several English downstream tasks. It is apparent that there is a very high correlation between the two quantities, showing that validation loss can serve as a valid proxy metric for downstream task performance. In the context of LLM pre-training, this approach also yields several other benefits:
• Ease of construction: Crafting multiple validation sets is a relatively effortless task. This enables the evaluation of a model's language modeling performance across varied domains.
• Simplicity in computation: Calculation of validation loss is straightforward, significantly reducing the computational and logistical overhead associated with tracking model training.
• High sensitivity to training progress: Validation loss is finely attuned to the progression of training, thereby offering a more detailed
perspective on how models evolve and improve over time. | 2310.19341#12 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
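The monitoring recipe in the chunk above, per-domain held-out sets scored by language modeling loss, is simple to implement. Below is a minimal sketch using Hugging Face Transformers; the checkpoint name and the tiny in-line validation sets are placeholders, not the paper's actual setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # placeholder checkpoint; any causal LM works the same way
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

# One small held-out set per domain, mirroring the per-distribution sets described above.
val_sets = {
    "en_web": ["The weather in London was unusually mild this October."],
    "code": ["def add(a, b):\n    return a + b\n"],
}

@torch.no_grad()
def mean_loss(texts):
    total_nll, total_tokens = 0.0, 0
    for text in texts:
        ids = tok(text, return_tensors="pt").input_ids
        # Passing labels=input_ids makes the model return the mean next-token cross-entropy.
        nll = model(ids, labels=ids).loss.item()
        total_nll += nll * ids.shape[1]
        total_tokens += ids.shape[1]
    return total_nll / total_tokens

for domain, texts in val_sets.items():
    print(f"{domain}: loss={mean_loss(texts):.3f}")
```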
2310.19341 | 13 | perspective on how models evolve and improve over time.
• Model-agnosticism: Validation loss is indifferent to the composition of the training corpus or the model architecture. It allows for comparison not only between different checkpoints produced within a single training session, but also across varied models from the community. This ensures a consistent and equitable basis for model comparison.
Note that monitoring the validation loss on a held-out set sharing the same distribution as the training set is a ubiquitous practice in machine learning. However, the observation of validation loss across multiple held-out sets, each with deliberate, unique distributions, is not common. We also note that the perspective asserting the primacy of language modeling loss as the paramount performance metric for models is not a recent revelation. This principle has been either explicitly or implicitly adopted in a number of research studies, as exemplified in (Kaplan et al., 2020; Hoffmann et al., 2022; Anil et al., 2023; Xia et al., 2023; Delétang et al., 2023).
# 3 Pre-training
3.1 SkyPile Corpus
In order to train Skywork-13B, we build SkyPile, a vast, high quality corpus comprising more than 6 trillion tokens. A segment of the corpus, comprising over 150 billion tokens of web text, has been open sourced to facilitate research and training on Chinese LLMs³. | 2310.19341#13 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 14 | Our SkyPile is an amalgamation of several sources, the overwhelming majority of which is gleaned from publicly accessible channels. Numerous prior research works, exemplified by initiatives such as LLaMA (Touvron et al., 2023a) and RefinedWeb (Penedo et al., 2023), have substantiated the notion that publicly accessible web data can yield exceptionally high-quality LLMs. In alignment with this empirical evidence, we subscribe to the premise of leveraging publicly accessible webpages as our primary source for training data.
³ huggingface.co/datasets/Skywork/SkyPile-150B
The construction of SkyPile is characterized by a dedicated emphasis on two primary dimensions: text quality and information distribution. Our data processing pipeline, inspired by (Wenzek et al., 2020; Touvron et al., 2023a; Penedo et al., 2023), incorporates the following stages: | 2310.19341#14 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 15 | • Structural Extraction: Due to the predominant source of our dataset being publicly accessible web pages, the objective of the first stage is the extraction of pertinent content while concurrently expunging extraneous textual elements that are deemed non-contributory to the training of our language model, e.g. these superfluous components include navigational bars, site-specific contact information, disjunctive title texts devoid of substantive content, etc. Subsequent to this culling process, the retained information predominantly consists of contiguous, medium to long-form textual passages.
• Distribution Filtering: In the pursuit of cultivating a profoundly adept LLM, the model's exposure must encompass a diverse array of content spanning an extensive spectrum of domains. Prior endeavors within the field have entailed the task of assigning categorical labels to each individual document or webpage, thereby manually dictating the composition of the training corpus. However, we posit that the corpus employed for LLM training has burgeoned to such an extent that the knowledge it encapsulates cannot be compartmentalized discretely. Consequently, eschewing a label-centric approach, our methodology centers on benchmarking the semantic affinities existing between textual segments, thereby identifying and omitting those text blocks characterized by an exceedingly high recurrence rate. | 2310.19341#15 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
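The recurrence-based distribution filtering described in the chunk above can be approximated by counting normalized text blocks across the corpus and dropping any block that recurs too often. In the sketch below, the normalization, the exact-hash matching (the paper speaks of semantic affinities), and the threshold are all simplifying assumptions.

```python
import hashlib
from collections import Counter

def canon(block: str) -> str:
    return " ".join(block.lower().split())  # crude canonical form for matching

def drop_recurrent_blocks(docs, max_recurrence=5):
    """Remove text blocks whose normalized form recurs too often across the corpus."""
    key = lambda b: hashlib.md5(canon(b).encode()).hexdigest()
    counts = Counter(key(b) for doc in docs for b in doc.split("\n\n"))
    return ["\n\n".join(b for b in doc.split("\n\n") if counts[key(b)] <= max_recurrence)
            for doc in docs]

# A boilerplate block repeated across many pages is filtered out; unique text survives.
docs = ["unique article text\n\nAll rights reserved."] + ["All rights reserved."] * 9
print(drop_recurrent_blocks(docs)[0])  # -> "unique article text"
```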
2310.19341 | 16 | • Deduplication: Deduplication has demonstrated its remarkable efficacy in enhancing the overall quality of a training corpus, and it has found extensive application in virtually all prominent datasets (Hernandez et al., 2022; Kandpal et al., 2022; Abbas et al., 2023; Lee et al., 2022). Within the framework of SkyPile, we regard deduplication as an integral component of the Distribution Filtering process. When considering the broader perspective, it becomes evident
that duplication constitutes a paramount factor influencing the semantic distribution of a corpus. Consequently, the techniques and strategies we employed during the distribution filtering phase autonomously eliminated a substantial portion of duplicated content.
• Quality Filtering: In this phase, we deploy the CCNet (Wenzek et al., 2020) pipeline to perform two critical filtration tasks: the elimination of content of inferior quality and the exclusion of pages that are neither in English nor Chinese. We trained a binary classifier that predicts the likelihood that a given webpage is suitable for inclusion as a reference within the Wikipedia corpus. The outcome of this stage is organized into distinct quality-based categories, and we retain exclusively the high quality groups, opting to discard the remaining groups in their entirety. | 2310.19341#16 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
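A hedged sketch of the kind of binary quality classifier the chunk above describes. CCNet itself trains a fastText model on Wikipedia-reference text; here scikit-learn stands in, and the training examples are toys rather than real web data.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins: positives resemble Wikipedia-citable prose, negatives are spammy text.
texts = [
    "The battle took place in 1066 near the town of Hastings.",
    "The compound was first synthesized in 1928 by a Scottish chemist.",
    "click here best deals buy now cheap cheap cheap",
    "win win win free lottery numbers click now",
]
labels = [1, 1, 0, 0]

clf = make_pipeline(
    HashingVectorizer(analyzer="char_wb", ngram_range=(3, 5), n_features=2**18),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)

# Pages are bucketed by predicted probability; only high-quality buckets are retained.
page = "The treaty was signed in 1648, ending a long war."
prob = clf.predict_proba([page])[0, 1]
print(f"p(high quality) = {prob:.2f} -> {'keep' if prob > 0.5 else 'discard'}")
```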
2310.19341 | 17 | Quality Filtering:
Above we described our pre-processing pipeline for natural text. As for Github content, we employ an approach that is similar to (Together Computer, 2023). We have devised a collection of straightforward yet efficacious heuristics, encompassing criteria such as line length filtration and alphanumeric thresholds, designed to discern and exclude content of low quality. Our criteria are specifically oriented toward enhancing content quality, as opposed to merely curbing its volume. Notably, in contrast to prevailing practices that involve the wholesale removal of a significant portion of json, xml, yaml, and html content, we have made a deliberate choice to retain a judiciously proportionate representation of these data formats.
Note that in pursuit of harmonizing the model's proficiency in both English and Chinese, we include in SkyPile a curated high-quality parallel corpus. This data is meticulously structured to pair a complete English paragraph with its corresponding Chinese counterpart, ensuring a seamless alignment of linguistic capabilities between the two languages.
3.2 Training Data Composition
Our Skywork-13B is pre-trained for 3.2 trillion tokens, sampled from SkyPile. Texts from certain sources are deemed as of high quality, e.g.
| 2310.19341#17 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
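The line-length and alphanumeric heuristics for Github content mentioned in the chunk above might look like the sketch below; every threshold here is invented for illustration and is not taken from the report.

```python
def keep_code_file(text: str,
                   max_mean_line_len: int = 100,
                   max_line_len: int = 1000,
                   min_alnum_ratio: float = 0.25) -> bool:
    """Heuristic filter in the spirit described above; all thresholds are illustrative."""
    lines = text.splitlines() or [""]
    mean_len = sum(len(line) for line in lines) / len(lines)   # line length filtration
    longest = max(len(line) for line in lines)
    alnum = sum(c.isalnum() for c in text) / max(len(text), 1)  # alphanumeric threshold
    return (mean_len <= max_mean_line_len
            and longest <= max_line_len
            and alnum >= min_alnum_ratio)

print(keep_code_file("def f(x):\n    return x * 2\n"))  # True: ordinary code
print(keep_code_file("=" * 5000))                        # False: one huge low-content line
```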
2310.19341 | 18 | 5
Category Percentage English Webpages Books Academic Papers Encyclopedia Miscellany 39.8% 3.6% 3.0% 0.5% 2.9% Chinese Webpages Social Media Encyclopedia Miscellany 30.4% 5.5% 0.8% 3.1% Other Lang. Encyclopedia 2.4% Code Github 8.0%
Table 1: Breakdown of training data in Stage-1 pre-training of Skywork-13B.
Wikipedia, hence have undergone upsampling. However, we generally stick to the rule that the number of repetition does not exceed five, as is recommended by recent studies (Taylor et al., 2022; Muennighoff et al., 2023).
We report in Table 1 a breakdown of the constituent components of the training tokens during Stage-1 pre-training. The training to- kens are primarily composed of English and Chinese texts, constituting 49.8% and 39.6% of the data, respectively. Code contributes 8.0% to the total, with texts in other languages ac- counting for the remaining 2.4%. The category labeled as âmiscellanyâ encompasses a diverse range of texts, including but not limited to, le- gal articles, court documents, company annual reports, and classical literature.
# 3.3 Tokenizer | 2310.19341#18 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
# 3.3 Tokenizer
We tokenize the data using byte-pair encoding (BPE) as implemented in SentencePiece (Kudo and Richardson, 2018), following the approach of LLaMA (Touvron et al., 2023a). Since our model is intended to be English-Chinese bilingual, we extend the original vocabulary of LLaMA, which primarily consists of Latin-based words and subwords, with frequently used Chinese characters and words. Specifically, we add 8,000 single-character tokens from BERT's vocabulary (Devlin et al., 2019) to LLaMA's vocabulary. We further expand the vocabulary with 25k frequent Chinese multi-character words. This results in a total vocabulary size of 65,536 tokens, of which 17 are reserved as special symbols.
As in LLaMA, we split all numbers into individual digits, and fall back to bytes to decompose unknown UTF-8 characters.
Category                                Size
Latin-based words & subwords          32,000
Chinese characters & Unicode symbols   8,000
Chinese words                         25,519
Reserved symbols                          17
Total                                 65,536

Table 2: Breakdown of the vocabulary used in Skywork-13B.
# 3.4 Architecture
Our Skywork-13B is based on the transformer architecture (Vaswani et al., 2017), consisting of stacks of transformer-decoder layers. In contrast to the original transformer model, we have incorporated several modifications, inspired by LLaMA (Touvron et al., 2023a,b). Our preliminary experiments, as illustrated in Figure 2, validate these changes, demonstrating the improved performance they confer. Details on this experiment can be found in Appendix A. While our network architecture largely follows the LLaMA model, there exists a notable difference in our preference for a deeper, yet narrower, network. A comparative exploration of the Skywork-13B and LLaMA2-13B network configurations is presented in Table 3.
The specific modifications made are described in detail below.
⢠Positional Embedding: We use Rotary Positional Embedding (RoPE) (Su et al., 2022), that was motivated by its extensive adoption in various prominent large lan- guage models, such as LLaMA and PaLM, as well as its demonstrated effectiveness in extending the length of context windows, as evidenced by recent studies (Chen et al., 2023; Rozière et al., 2023; Xiong et al., 2023). | 2310.19341#20 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
• Layer Normalization: We replaced the conventional layer normalization with RMSNorm (Zhang and Sennrich, 2019). Additionally, we adopted pre-normalization in each layer instead of post-normalization, which has been shown to enhance the training stability of transformer models.
Figure 2: Preliminary experiments: comparison of the conventional GPT architecture and the more recent LLaMA architecture. For each of the two transformer variants, a model with 7 billion parameters is trained from scratch on 200 billion tokens. The plot clearly shows that the LLaMA architecture achieves a lower training loss than GPT, demonstrating the former's superiority.
⢠Activation: We employed the SwiGLU acti- vation function (Shazeer, 2020). In line with established conventions in prior studies, we reduced the dimension of the feed-forward network (FFN) from four times the hidden size to eight-thirds of the hidden size. This adjustment was made to maintain parity be- tween the total parameters in a layer and those in the vanilla transformer layer. | 2310.19341#21 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
                     LLaMA2-13B   Skywork-13B
Vocab. Size              32,000        65,536
Hidden Dim.               5,120         4,608
FFN Dim.                 13,696        12,288
Head Dim.                   128           128
Num. Heads                   40            36
Num. Layers                  40            52
Seq. Len.                 4,096         4,096
#Tokens per Batch            4M           16M
Peak LR                    3e-4          6e-4
Minimum LR                 3e-5          6e-5

Table 3: Comparison of the architecture and important hyper-parameters of Skywork-13B and LLaMA2-13B.
# 3.5 Infrastructure
Our Skywork-13B is trained on a cluster of 64 NVIDIA-HGX-A800 nodes, for a total of 512 A800-80G SXM GPUs. Each node in the cluster is outfitted with high-speed 400GB/s NVLinks for intra-node communication and an 800Gb/s RoCE network for inter-node connectivity. Our training framework is based on the Megatron-LM (Shoeybi et al., 2020) library, designed to support the stable, prolonged training of large-scale models, accommodating thousands of GPUs and model sizes on the order of hundreds of billions of parameters.
Considering the relatively moderate size of our Skywork-13B model, we have avoided the use of GPU memory optimization techniques and parallel schemes that could impede speed. These include Tensor Model Parallelism (Shoeybi et al., 2020), Sequence Parallelism (Korthikanti et al., 2022), ZeRO-Stage2 (Rajbhandari et al., 2020), and Checkpointing (Chen et al., 2016). Instead, we have leveraged Data Parallelism (DP) with ZeRO-1 (Rajbhandari et al., 2020) and Pipeline Parallelism (PP) (Narayanan et al., 2021) as the primary parallelization strategies for training Skywork-13B. ZeRO-1 substantially diminishes the GPU memory footprint of the Adam optimizer state without increasing the burden on intercommunication. Pipeline Parallelism offers memory optimization at a minimal communication overhead, which decreases as the gradient accumulation step increases, thereby mitigating the slowdown of all-reduce as the DP size increases. Regarding operator optimization, we adopted Flash Attention V2 (Dao et al., 2022; Dao, 2023), a strategy that both optimizes GPU memory and expedites the training process.
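For orientation, ZeRO-1-style sharding of the Adam optimizer state is also available in stock PyTorch. The sketch below is purely illustrative (the actual training uses Megatron-LM's implementation); the learning rate, betas, and weight decay are the values reported in this paper, and an initialized process group (e.g., via torchrun) is assumed.

```python
import torch
from torch.distributed.optim import ZeroRedundancyOptimizer

# Minimal sketch of ZeRO-1 (optimizer-state sharding) with stock PyTorch.
def build_optimizer(model: torch.nn.Module):
    return ZeroRedundancyOptimizer(
        model.parameters(),
        optimizer_class=torch.optim.AdamW,  # Adam states are what ZeRO-1 shards
        lr=6e-4,
        betas=(0.9, 0.95),
        weight_decay=0.1,
    )
```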
Upon extensive preliminary experiments, we decided to adopt the combination of DP256, PP2, and ZeRO-1 as our distributed training strategy for Skywork-13B. With this configuration, we achieved a token throughput of 1,873 tokens per GPU per second and a model FLOPs utilization (MFU) of 56.5%. An overview of these experiments is provided in Appendix B. The training process of Skywork-13B spanned a total of 39 days.
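A back-of-the-envelope check makes the reported MFU plausible. The sketch below uses the common 6N-plus-attention FLOPs-per-token approximation; the peak BF16 throughput is an assumed A100-class datasheet value, so the estimate only roughly matches the reported 56.5%.

```python
# Rough MFU estimate from the figures reported in this section.
N = 13e9                   # parameters
layers, hidden, seq = 52, 4608, 4096
tokens_per_gpu_s = 1873
peak_bf16_flops = 312e12   # assumed dense BF16 peak for an A800-80G SXM GPU

# Forward+backward FLOPs per token: 6*N plus the quadratic attention term.
flops_per_token = 6 * N + 12 * layers * hidden * seq
mfu = tokens_per_gpu_s * flops_per_token / peak_bf16_flops
print(f"estimated MFU ~ {mfu:.1%}")   # ~54%, in the ballpark of the reported 56.5%
```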
# 3.6 Training Details
As outlined in Section 2.1, the pre-training of Skywork-13B is executed in two stages:
⢠Stage-1: General purpose pre-training on SkyPile-Main.
⢠Stage-2: STEM-oriented continual pre- training on SkyPile-STEM.
In both stages, the model is trained using the standard auto-regressive language modeling objective, with context lengths fixed at 4,096 tokens. The AdamW optimizer (Loshchilov and Hutter, 2019), applied for the training process, uses β1 and β2 values of 0.9 and 0.95, respectively. Throughout pre-training, we applied a weight decay of 0.1 and gradient clipping of 1.0. Our model was trained with bfloat16 mixed precision.
# 3.6.1 Stage-1 Pre-training
In the first stage, our Skywork-13B model is trained from scratch on SkyPile-Main for over three trillion tokens. This stage consists of two sequential training sessions, covering the first 0∼2T tokens and the subsequent 2∼3T tokens, respectively.
Our initial plan was to train Skywork-13B for two trillion tokens. We launched a training session accordingly, with a cosine learning rate schedule that gradually decays from a peak learning rate of 6e-4 to a final learning rate of 6e-5. In Figure 3, we report in red curves the evolution of language modeling losses and several benchmark results of our Skywork-13B during this session. It is evident that by the end of this session, the model had not reached saturation. We hypothesized that the model could further benefit from additional pre-training, prompting us to launch a secondary training session targeting an additional one trillion tokens.
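For concreteness, a cosine schedule of this shape can be written as below; only the peak and final learning rates come from this report, while the warmup length is an assumption added for completeness.

```python
import math

def cosine_lr(step, total_steps, peak_lr=6e-4, final_lr=6e-5, warmup_steps=2000):
    """Cosine decay from peak_lr to final_lr. The linear warmup is assumed;
    the report only specifies the peak and final values."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    cos = 0.5 * (1.0 + math.cos(math.pi * min(progress, 1.0)))
    return final_lr + (peak_lr - final_lr) * cos
```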
The second training session utilized a slightly different composition of training data compared to the initial 0∼2T session, as data from certain sources had been depleted and fresh sources were introduced. Owing to the shift in the training distribution, we meticulously tuned the learning rate, eventually deciding on a constant learning rate of 6e-5 for the 2∼3T session. In Figure 4, we illustrate the model losses under varying learning rate conditions. Results indicate that a higher learning rate leads to escalations in training loss, which we deem too costly to reverse. The impact of the second training session is depicted in the blue curves of Figure 3. The enhancement in the model's performance continues, albeit at a decelerating pace. Interestingly, although our Skywork-13B trails in the realm of English language modeling, it significantly surpasses all other comparable open LLMs in Chinese language modeling.
Figure 3: Trajectory of important monitoring metrics during Stage-1 pre-training. Top left: training loss. Top middle and right: validation loss on English and Chinese held-out sets of web texts. The horizontal dashed lines in the middle and right plots correspond to the evaluated language modeling loss for several similar-sized open LLMs. Bottom: benchmark results on CEVAL, MMLU and GSM8K, respectively. Stage-1 pre-training consists of two sequential training sessions, represented by different colors in the loss curves (red for session 0∼2T and blue for session 2∼3T).
In Section 4.3, we will confirm that the superiority of our Skywork-13B in Chinese language modeling is not only true on our validation set; it also holds true on a number of test sets sourced from diverse domains.
More results can be found in the Appendix (see Figure 6).
# 3.6.2 Stage-2 Pre-training
The primary aim of Stage-2 pre-training is to augment the model with capabilities pertinent to STEM disciplines. The data utilized in this stage comprises approximately 20% from SkyPile-STEM and 80% from SkyPile-Main, amassing a total of roughly 130 billion tokens. A constant learning rate of 6e-5 is adopted, maintaining parity with the terminal learning rate used in Stage-1 pre-training.
This training strategy proved successful in maintaining the stability of the model's language modeling validation loss while enabling an optimal transfer of STEM knowledge. The extended training period ensures a comprehensive assimilation of STEM-related knowledge into the model without causing significant disturbance to the pre-existing learned information.
Consequent to the data distribution shift from Stage-1 to Stage-2, it becomes crucial to meticulously calibrate the sampling ratio between the different data sources. Initial experiments revealed that a gradual increment in the SkyPile-STEM ratio yielded the most effective results. Therefore, for the actual Stage-2 pre-training phase, we implemented a sampling plan that commenced with 10% of SkyPile-STEM initially, gradually escalating to a peak of 40% towards the conclusion of the training.
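A minimal sketch of such a ramp is given below. A linear schedule is assumed for illustration; the report only states the 10% starting point and the 40% peak.

```python
def stem_ratio(step, total_steps, start=0.10, peak=0.40):
    """Sampling ratio of SkyPile-STEM during Stage-2, ramped over training.
    The linear shape is an assumption; only the endpoints come from the text."""
    return start + (peak - start) * min(step / total_steps, 1.0)

# Example: mixing weights handed to the data sampler at a given step.
r = stem_ratio(step=5000, total_steps=8000)
weights = {"SkyPile-STEM": r, "SkyPile-Main": 1.0 - r}
```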
The impact of Stage-2 pre-training is illustrated in Figure 5, which presents the progression of the CEVAL benchmark score.
[Figure 4 plot: training loss over tokens 1900B-2040B for continual pre-training runs with LR = 6e-5, 1.2e-4, and 2.5e-4.]
Figure 4: Test runs for tuning the learning rate of the 2∼3T training session. It can be seen that 6e-5, which is the terminal learning rate from the 0∼2T training session, yields the best result.
The evolution of scores on other STEM-related benchmarks, such as GSM8K, mirrors a similar trend. Improvements in individual subjects of CEVAL can be found in Table 12 (see appendix).
[Figure 5 plot: CEVAL accuracy rising over the ~130B tokens of Stage-2 pre-training.]
Figure 5: Evolution of CEVAL score during Stage-2 pre-training.
# 4 Evaluation
# 4.1 Baselines
We compare the performance of our Skywork-13B with open models of similar size, including LLaMA-13B (Touvron et al., 2023a), LLaMA2-13B (Touvron et al., 2023b), Baichuan-13B, Baichuan2-13B (Baichuan Inc., 2023), Xverse-13B (Xverse-AI, 2023), and InternLM-20B (InternLM Team, 2023). A summary of these models can be found in Table 4.
Model            #Tokens   Language
OpenLLaMA-13B       1.0T   English
LLaMA-13B           1.0T   English
LLaMA2-13B          2.0T   English
Baichuan-13B        1.4T   English & Chinese
Baichuan2-13B       2.6T   English & Chinese
Xverse-13B          1.4T   English & Chinese
InternLM-20B        2.3T   English & Chinese
Skywork-13B         3.2T   English & Chinese

Table 4: Details of various models. The column labeled "#Tokens" indicates the quantity of training tokens used by each model, whereas the "Language" column specifies the primary languages supported by each model.
# 4.2 Benchmark Evaluation
We focus on the following popular benchmarks:
⢠MMLU (Hendrycks et al., 2021): MMLU is a benchmark designed to measure knowledge acquired during pre-training. The bench- mark covers 57 subjects across STEM, the humanities, the social sciences, and more, ranging in difficulty from an elementary level to an advanced professional level. It tests both world knowledge and problem solving ability. | 2310.19341#31 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
• CEVAL (Huang et al., 2023) and CMMLU (Li et al., 2023a): These are Chinese benchmarks that mimic MMLU. CEVAL consists of 13,948 multiple-choice questions spanning 52 diverse disciplines and four difficulty levels. CMMLU covers 67 disciplines that span from elementary to advanced professional levels.
⢠GSM8K (Cobbe et al., 2021): This dataset consists of 8500 high-quality grade school math word problems created by human writ- ers. These multi-step problems require be- tween 2 and 8 steps to solve. GSM8K is usually used in benchmarking multi-step mathematical reasoning ability of LLMs.
In Table 5 we present a comparison of performance results from different models on these benchmarks. The metrics for CEVAL, CMMLU and MMLU are 5-shot accuracy, while for GSM8K it is 8-shot accuracy. Higher numbers indicate better performance. It can be seen that our Skywork-13B achieves the highest scores on the CEVAL, MMLU and GSM8K benchmarks, with 60.6, 62.1 and 55.8, respectively. On the CMMLU benchmark, Baichuan2-13B achieves the highest performance with a score of 62.0.
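For readers unfamiliar with k-shot evaluation, the sketch below shows the usual prompt assembly. The template is a generic assumption, not the exact prompt format used in this report.

```python
def few_shot_prompt(dev_examples, question, k=5):
    """Assemble a k-shot prompt: k solved dev-set examples, then the test
    question. `dev_examples` is a list of {'question': ..., 'answer': ...}."""
    parts = []
    for ex in dev_examples[:k]:
        parts.append(f"Question: {ex['question']}\nAnswer: {ex['answer']}\n")
    parts.append(f"Question: {question}\nAnswer:")
    return "\n".join(parts)
```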
In summary, our Skywork model has demonstrated exceptional performance across a diverse range of comprehensive benchmark tests. Results for individual subjects of CEVAL can be found in Table 12. Results for other benchmarks can be found in Appendix C.
# 4.3 Language Modeling Results
# 4.3.1 LM as a solution to benchmark overfitting
Conventional benchmarks for evaluating LLMs often rely on static datasets of human-annotated examples. A core issue with this approach is that updating the test samples regularly is difficult and costly. Over time, the static test sets tend to be overfitted, producing misleading benchmark results.
We propose language modeling evaluations as a compelling alternative. Perplexity in language modeling acts as a proxy metric strongly linked to performance on diverse downstream tasks (see Figure 1). Since language modeling solely requires unlabeled natural text, it eliminates the need for expensive human annotation. Constructing and revising language modeling test sets is low-cost, as new data can be readily sampled from newly published content. Additionally, if a test set becomes compromised, fresh test data can quickly be sampled as a replacement.
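A minimal sketch of such an evaluation is given below, using the Hugging Face transformers API. The model name and the document sources are placeholders; this is an illustration of the protocol, not the exact evaluation harness used here.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model identifier; substitute any causal LM under evaluation.
tok = AutoTokenizer.from_pretrained("some-causal-lm")
model = AutoModelForCausalLM.from_pretrained("some-causal-lm").eval()

@torch.no_grad()
def domain_perplexity(docs, max_len=4096):
    """Token-weighted perplexity over a list of raw documents from one domain."""
    total_nll, total_tokens = 0.0, 0
    for doc in docs:
        ids = tok(doc, return_tensors="pt",
                  truncation=True, max_length=max_len).input_ids
        out = model(ids, labels=ids)   # mean NLL over the predicted tokens
        n = ids.numel() - 1
        total_nll += out.loss.item() * n
        total_tokens += n
    return math.exp(total_nll / total_tokens)

# e.g. {"tech": [...], "movie": [...], "finance": [...]} -> per-domain scores
```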
# 4.3.2 Construction of diverse LM testsets
We compare the language modeling capabilities of various language models with our Skywork-13B, focusing on the Chinese language.
To conduct a robust evaluation of language modeling capability, we have separately collected a diverse corpus of texts from a myriad of websites, each labeled according to its respective domain. The domains we cover span a wide spectrum, encompassing areas such as technology, movies, and finance, to name a few. These domain-specific evaluation datasets have also been open-sourced for public access.4
4 Github: https://github.com/SkyworkAI/Skywork/tree/main/data/eval_loss
We ensure that every test sample consists of documents or user posts published after September 1, 2023. This cut-off date guarantees that no test sample was inadvertently included during the pre-training of any evaluated language model. Specifically, SkyPile's cut-off date is June 30, 2023, and the majority of models under evaluation were released prior to August 31.
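The admissibility rule reduces to a simple date filter, sketched below; the document schema (a dict with a 'published' date) is an assumption for illustration.

```python
from datetime import date

CUTOFF = date(2023, 9, 1)   # publication cut-off used for the LM test sets

def is_admissible(doc) -> bool:
    """Keep only documents published on or after the cut-off date."""
    return doc["published"] >= CUTOFF
```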
Note that while the held-out validation set used to monitor the training progress (as shown in Figure 3) of our model could also serve this purpose, it has the same distribution (web texts) as the bulk of the training corpus, and thus may lead to an overly optimistic estimate of the actual language modeling capability of the model. More details on the sources of the test samples and the underlying data collection pipeline can be found in Appendix D.
# 4.3.3 Results
The results of our language modeling evaluation are presented in Table 6, where results from ChatGLM3-6B (THUDM, 2023), MOSS-7B (Sun and Qiu, 2023), Baichuan2-7B (Baichuan Inc., 2023), Qwen-7B (Qwen Team, 2023), InternLM-7B (InternLM Team, 2023) and Aquila2-34B are also included.
It can be seen that our Skywork-13B model shows the best performance overall, obtaining the lowest average perplexity score of 9.42. It also exhibits the best performance across individual domains, achieving the lowest perplexity scores in the tech (11.58), movie (21.84), government (4.76), and finance (4.92) domains. It excels not only in surpassing the performance of models of a similar size, but also in outperforming significantly larger models such as InternLM-20B and Aquila2-34B.
We attribute the excellent language modeling performance of our Skywork-13B to the quality of our training corpus. Details on the rigorous data filtering pipeline are described in Section 3.1.
# 5 Discussion
In this section, we delve into the benefits and associated risks of pre-training on the in-domain data5 of benchmark tasks.
5 The term "in-domain data" is a vague one that refers to any data with a distribution closely resembling that of the task data. For instance, the training data of a task is trivially in-domain data for that task.
| Model | CEVAL | CMMLU | MMLU | GSM8K |
|---|---|---|---|---|
| OpenLLaMA-13B | 27.1 | 26.7 | 42.7 | 12.4 |
| LLaMA-13B | 35.5 | 31.2 | 46.9 | 17.8 |
| LLaMA-2-13B | 36.5 | 36.6 | 54.8 | 28.7 |
| Baichuan-13B | 52.4 | 55.3 | 51.6 | 26.6 |
| Baichuan2-13B | 58.1 | 62.0 | 59.2 | 52.8 |
| XVERSE-13B | 54.7 | - | 55.1 | - |
| InternLM-20B | 58.8 | - | 62.0 | 52.6 |
| Skywork-13B | **60.6** | **61.8** | **62.1** | **55.8** |
Table 5: Comparison of results on popular benchmarks. The best result in each column is shown in bold. It can be seen that our Skywork-13B consistently performs well across the different benchmarks, indicating its overall robustness.
| Model | Tech | Movie | Gov. | Game | Finance | General | Average |
|---|---|---|---|---|---|---|---|
| ChatGLM3-6B | 12.48 | 23.48 | 5.07 | 18.45 | 5.67 | 7.47 | 10.25 |
| MOSS-7B | 20.83 | 39.66 | 11.08 | 31.24 | 10.59 | 13.25 | 18.50 |
| InternLM-7B | 13.43 | 24.90 | 5.88 | 19.78 | 6.17 | 8.10 | 11.17 |
| Qwen-7B | 13.39 | 25.16 | 5.55 | 19.26 | 5.76 | 7.78 | 10.83 |
| Baichuan2-7B | 12.89 | 23.26 | 5.34 | 18.36 | 5.68 | 7.62 | 10.41 |
| LLaMA2-13B | 23.26 | 50.66 | 18.09 | 32.52 | 14.85 | 16.55 | … |
| Xverse-13B | 12.55 | 23.49 | 5.20 | 17.69 | 5.54 | 7.46 | … |
| Baichuan-13B | 12.38 | 22.46 | 5.21 | 17.59 | 5.42 | 7.37 | … |
| Baichuan2-13B | 12.14 | 21.85 | 5.05 | 17.15 | 5.35 | … | … |
| Qwen-14B | 11.90 | 22.43 | 4.89 | 16.94 | 5.24 | … | … |
| InternLM-20B | 12.34 | 22.06 | 5.75 | 17.45 | 5.73 | … | … |
| Aquila2-34B | 14.62 | 29.09 | 5.72 | 21.78 | 5.83 | … | … |
| Skywork-13B | **11.58** | **21.84** | **4.76** | … | **4.92** | … | **9.42** |
Table 6: Comparative analysis of language modeling capabilities across diverse domains. Performance is measured using perplexity (lower is better). The best result in each column is shown in bold.
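As an aside for readers who wish to run this kind of per-domain comparison themselves, below is a minimal sketch of how such perplexity numbers can be computed with a HuggingFace causal LM. It is our illustration rather than the paper's evaluation harness; the model name and the domain document lists are placeholders.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def domain_perplexity(model, tok, docs, max_length=1024):
    """Perplexity = exp(total NLL / total predicted tokens) over a domain's docs."""
    model.eval()
    nll_sum, n_tokens = 0.0, 0
    with torch.no_grad():
        for text in docs:
            ids = tok(text, return_tensors="pt", truncation=True,
                      max_length=max_length)["input_ids"].to(model.device)
            if ids.size(1) < 2:
                continue  # need at least one next-token prediction
            # labels=ids makes the model return the mean cross-entropy over
            # the ids.size(1) - 1 shifted next-token positions of this sample.
            loss = model(ids, labels=ids).loss
            nll_sum += loss.item() * (ids.size(1) - 1)
            n_tokens += ids.size(1) - 1
    return math.exp(nll_sum / n_tokens)

# Placeholder model; substitute the LLM under study.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Placeholder corpora: map each domain name to a list of raw documents.
domains = {"tech": ["replace with raw tech-domain documents ..."],
           "finance": ["replace with raw finance-domain documents ..."]}
for name, docs in domains.items():
    print(name, round(domain_perplexity(model, tok, docs), 2))
```

Aggregating the negative log-likelihood per token (rather than per document) keeps long and short documents comparably weighted, which matters when domains differ in typical document length.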
# 5.1 Effect of pre-training on in-domain data
Pre-trained language models, or foundation models, are intended to be used in transfer learning as a general-purpose backbone. As a foundation model in itself has little usage other than sentence completion, its quality is typically evaluated in terms of performance on downstream tasks. Apparently, when it comes to improving a foundation model's quality as measured by its performance on a given task, it is always far more efficient to train the model on in-domain data for that task (Hernandez et al., 2021; Chung et al., 2022), as compared to general-purpose data (web texts).
We have shown that Stage-2 pre-training significantly amplifies our Skywork-13B's STEM-related capabilities, leading to a substantial improvement in performance on STEM-related tasks. Now we show that it is even possible to enhance a much weaker base model, i.e., an intermediate checkpoint, using only a fraction of the data and compute used in Stage-2 pre-training.
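To make the setup concrete, here is a minimal sketch of what such domain-enhancement (continued) pre-training could look like with the HuggingFace Trainer. The checkpoint path, corpus file, and hyperparameters are illustrative placeholders, not the actual Stage-2 configuration used for Skywork-13B.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

ckpt = "path/to/intermediate-checkpoint"  # e.g. a partially trained backbone
tok = AutoTokenizer.from_pretrained(ckpt)
if tok.pad_token is None:
    tok.pad_token = tok.eos_token  # causal LM tokenizers often lack a pad token
model = AutoModelForCausalLM.from_pretrained(ckpt)

# Plain-text in-domain corpus, one document per line (placeholder file).
raw = load_dataset("text", data_files={"train": "stem_corpus.txt"})
train = raw["train"].map(
    lambda batch: tok(batch["text"], truncation=True, max_length=2048),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-enhanced",
                           per_device_train_batch_size=2,
                           learning_rate=6e-5,
                           num_train_epochs=1,
                           bf16=True),
    train_dataset=train,
    # mlm=False selects the standard next-token (causal) LM objective.
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```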
Table 7 presents the CEVAL and GSM8K scores before and after pre-training on in-domain data, utilizing a relatively weak model checkpoint that has only undergone 0.5T of pre-training. The results indicate that after pre-training with merely 1B tokens of in-domain data, a weak model, initially performing only slightly better than random on CEVAL and GSM8K, can surpass the performance of our strongest Skywork-13B (3T) backbone without in-domain pre-training. However, this comes at the cost of significant degradation in language modeling performance, as evidenced by the higher validation losses shown in the two rightmost columns of the table.

|  | CEVAL | GSM8K | En Loss | Zh Loss |
|---|---|---|---|---|
| Before | 28.3 | 6.9 | 1.86 | 2.08 |
| After | 50.8 | 40.7 | 2.09 | 2.21 |
| Δ | +22.5 | +33.8 | +0.23 | +0.13 |

Table 7: The impact of pre-training on a 0.5T checkpoint of Skywork-13B using only 1B tokens. The training data is sourced from a subset of our SkyPile-STEM corpus. The columns "En Loss" and "Zh Loss" show the model's validation loss on held-out sets of English and Chinese web texts, respectively.
# 5.2 Pre-training on in-domain data: a common practice?
It is of interest to explore whether popular foundation models are pre-trained on in-domain data. In pursuit of this, we delve into the GSM8K dataset, which is equipped with an official train/test split and comprehensive solutions. We evaluate an LLM's language modeling loss on three datasets drawn from the same distribution: 1) the official GSM8K training set, 2) the official GSM8K test set, and 3) a set composed of GSM8K-like samples generated by GPT-4. The corresponding losses are denoted as $L_\text{train}$, $L_\text{test}$, and $L_\text{ref}$, respectively. Theoretically, if a language model has not been exposed to any of the three datasets during pre-training, the three losses should be approximately equivalent. However, if the model has been pre-trained on the training set, or if the test data has been inadvertently exposed during the pre-training process, we would anticipate notable discrepancies between $L_\text{train}$, $L_\text{test}$, and $L_\text{ref}$.
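Concretely, the probe only requires computing an average language modeling loss over the three sample sets. The sketch below shows one way to do this with a HuggingFace causal LM, assuming each sample is the concatenation of a question and its solution; the model name is a stand-in for whichever LLM is being audited, and the GPT-4-generated reference set is not reproduced here.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

def mean_lm_loss(model, tok, texts, max_length=1024):
    """Average per-sample LM loss (mean next-token cross-entropy) over texts."""
    model.eval()
    total = 0.0
    with torch.no_grad():
        for text in texts:
            ids = tok(text, return_tensors="pt", truncation=True,
                      max_length=max_length)["input_ids"].to(model.device)
            # labels=ids -> the model returns the mean cross-entropy over the
            # shifted next-token predictions for this sample.
            total += model(ids, labels=ids).loss.item()
    return total / len(texts)

tok = AutoTokenizer.from_pretrained("gpt2")  # stand-in for the audited LLM
model = AutoModelForCausalLM.from_pretrained("gpt2")

gsm8k = load_dataset("gsm8k", "main")
samples = lambda split: [ex["question"] + "\n" + ex["answer"] for ex in split]

L_train = mean_lm_loss(model, tok, samples(gsm8k["train"]))
L_test = mean_lm_loss(model, tok, samples(gsm8k["test"]))
delta2 = L_test - L_train  # markedly positive -> overfitting on the train split
# With a GPT-4-generated reference set ref_texts:
#   delta1 = L_test - mean_lm_loss(model, tok, ref_texts)
# markedly negative -> the test split likely leaked into pre-training.
```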
Our results are outlined in Table 8, which also reports the differences in losses, $\Delta_1 = L_\text{test} - L_\text{ref}$ and $\Delta_2 = L_\text{test} - L_\text{train}$. Notably, the $\Delta_2$ column reveals that for most models, the language modeling losses on the GSM8K training and test splits are almost identical. However, models such as ChatGLM3-6B, Baichuan2-13B, Qwen-7B/14B, and Aquila2-34B display markedly lower loss on the training split than on the test split. Consequently, we postulate that these models may have been considerably pre-trained on the GSM8K training split or similar data.

Moreover, we notice one particular anomaly in the $\Delta_1$ column: a significantly lower $L_\text{test}$ compared to $L_\text{ref}$, which is interesting to study further for a better understanding.
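To spell out the reasoning behind these two statistics (our own restatement, not additional analysis from the paper): if the training split, the test split, and the reference set are exchangeable draws from the same distribution $\mathcal{D}$, and the model $\theta$ has seen none of them, then

```latex
\mathbb{E}_{x \sim \mathcal{D}}\!\left[ L(\theta; x) \right]
  \approx L_\text{train} \approx L_\text{test} \approx L_\text{ref}
\quad\Longrightarrow\quad
\Delta_1 \approx 0 \quad\text{and}\quad \Delta_2 \approx 0 .
```

Training on the train split depresses $L_\text{train}$ specifically, pushing $\Delta_2$ well above zero, while exposure to the test split depresses $L_\text{test}$ relative to the unseen reference set, pushing $\Delta_1$ well below zero.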
# 5.3 Pre-Training or Supervised Fine-Tuning?
In the era preceding the advent of LLMs such as GPT-4 (Bubeck et al., 2023; OpenAI, 2023) and Claude (Bai et al., 2022), supervised data for NLP tasks was generally scarce. This was because the process of data collection and annotation was both time-consuming and costly. Due to the scarcity of supervised data, NLP researchers relied on unsupervised pre-training techniques (Mikolov et al., 2013; Peters et al., 2018; Radford et al., 2018; Devlin et al., 2019) to improve downstream task performance via transfer learning, with supervised data used only in the fine-tuning stage. In this context, pre-training on in-domain (supervised) data was pointless, as it would defeat the purpose of pre-training itself (transfer learning).
This reality has significantly shifted, however, with the emergence of powerful LLMs. This is because procuring large amounts of high-quality supervised/in-domain data is now as simple as making a few API requests to these LLMs, and it is comparatively low-cost (Wang et al., 2023; Taori et al., 2023). This new reality blurs the boundary between pre-training and supervised fine-tuning, making it feasible to incorporate substantial amounts of supervised data into the pre-training phase (Gunasekar et al., 2023; Li et al., 2023b). After all, curated in-domain data, whether written by human annotators or generated by an LLM, is a form of human knowledge, and there is good reason for this knowledge to be absorbed into a foundation model.
That said, we believe there is a valid risk in the practice of targeted pre-training, in that it compromises fairness in benchmarking: while a model may excel at specific tasks through pre-training on in-domain data, it remains uncertain how well it would perform on unseen tasks. Its capabilities may be overestimated based on the benchmark alone, which can lead to unfair comparisons between models and mislead users or stakeholders about the true capabilities of the model.
| Model | $L_\text{test}$ | $L_\text{train}$ | $L_\text{ref}$ | $\Delta_1$ | $\Delta_2$ |
|---|---|---|---|---|---|
| ChatGLM3-6B | 0.99 | 0.78 | 0.99 | 0.0 | **0.21** |
| MOSS-7B | 1.51 | 1.52 | 1.49 | 0.02 | -0.01 |
| InternLM-7B | 1.21 | 1.12 | 1.27 | -0.06 | 0.09 |
| Qwen-7B | 1.07 | 0.64 | 1.10 | -0.03 | **0.43** |
| Baichuan2-7B | 1.41 | 1.42 | 1.36 | 0.05 | -0.01 |
| LLaMA-13B | 1.41 | 1.42 | 1.36 | 0.05 | -0.01 |
| LLaMA2-13B | 1.36 | 1.38 | 1.33 | 0.03 | -0.01 |
| Xverse-13B | 1.42 | 1.43 | 1.39 | 0.03 | -0.01 |
| Baichuan-13B | 1.41 | 1.42 | 1.37 | 0.04 | -0.01 |
| Baichuan2-13B | 1.09 | 0.72 | 1.12 | -0.03 | **0.37** |
| Qwen-14B | 1.03 | 0.42 | 1.14 | -0.11 | **0.61** |
| InternLM-20B | 1.20 | 1.09 | 1.19 | 0.01 | 0.11 |
| Aquila2-34B | 0.78 | 0.39 | 1.29 | **-0.51** | **0.39** |
| Skywork-13B | 1.01 | … | 1.00 | 0.01 | … |
Table 8: We evaluate the language modeling (LM) loss on samples (a sample is a concatenation of question and answer) from the GSM8K dataset for several foundation models. For each LLM, we compare the LM loss on the training split ($L_\text{train}$), the test split ($L_\text{test}$), and a specially curated reference set ($L_\text{ref}$) generated by GPT-4 and designed to mimic the GSM8K dataset. We also report two key metrics: $\Delta_1 = L_\text{test} - L_\text{ref}$, serving as an indicator of potential test data leakage during the training of the LLM (a lower value suggests possible leakage), and $\Delta_2 = L_\text{test} - L_\text{train}$, which measures the degree of overfitting on the training split of the dataset. A higher value of $\Delta_2$ implies excessive overfitting. Outliers for both $\Delta_1$ and $\Delta_2$ are shown in bold.
# 6 Limitation
Our pre-training approach for Skywork-13B involved a two-stage process: general-purpose pre-training followed by domain-specific enhancement pre-training. However, it remains unclear whether this methodology can produce a model on par with, or superior to, a model trained in one stage on a mixed corpus. Further investigation is needed to determine the comparative effectiveness of these pre-training approaches.

Additionally, we have proposed using language modeling loss or perplexity as proxy metrics for monitoring and evaluating large language models. A limitation is that language modeling evaluation relies on the specific distribution used to sample test data, of which there are infinite possibilities. While language modeling perplexity over a given data distribution may predict performance on some tasks, it may not translate to other tasks. The correlation between language modeling and downstream performance could vary across different distributions and tasks.

# 7 Conclusion

Our work on Skywork-13B represents a significant leap forward in the development of open large language models. We believe that our comprehensive and transparent approach to the model's development will be a valuable resource for researchers in the field, fostering collaboration and open-source principles. Our two-stage training methodology, leveraging a segmented corpus, offers a novel approach for enhancing model capability in a specific domain, while our method of monitoring the training progress provides a practical solution to the challenges of tracking the improvement of these models over time.
However, our work is more than just the creation of a new LLM. It is a call to action for the broader NLP community, urging a return to the principles of fairness, transparency, and the sharing of ideas that have historically fueled progress in the field. We hope that Skywork-13B will not only serve as a powerful tool for a wide range of applications but also inspire a renewed commitment to openness and cooperation in the development of future models.
# References
Amro Abbas, Kushal Tirumala, Dániel Simig, Surya Ganguli, and Ari S. Morcos. 2023. Semdedup: Data-efficient learning at web-scale through semantic deduplication.
Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. 2023. PaLM 2 technical report. arXiv preprint arXiv:2305.10403.
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, Ben Mann, and Jared Kaplan. 2022. Training a helpful and harmless assistant with reinforcement learning from human feedback.

Baichuan Inc. 2023. Baichuan 2: Open large-scale language models. https://github.com/baichuan-inc/Baichuan2/blob/main/README_EN.md.

Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. 2019. PIQA: Reasoning about physical commonsense in natural language.
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, and Yi Zhang. 2023. Sparks of artificial general intelligence: Early experiments with GPT-4.

Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. 2023. Extending context window of large language models via positional interpolation.

Tianqi Chen, Bing Xu, Chiyuan Zhang, and Carlos Guestrin. 2016. Training deep nets with sublinear memory cost.

Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instruction-finetuned language models.

Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. BoolQ: Exploring the surprising difficulty of natural yes/no questions.

Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? Try ARC, the AI2 reasoning challenge. arXiv preprint arXiv:1803.05457.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems.

Tri Dao. 2023. FlashAttention-2: Faster attention with better parallelism and work partitioning.

Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. 2022. FlashAttention: Fast and memory-efficient exact attention with IO-awareness.

Grégoire Delétang, Anian Ruoss, Paul-Ambroise Duquenne, Elliot Catt, Tim Genewein, Christopher Mattern, Jordi Grau-Moya, Li Kevin Wenliang, Matthew Aitchison, Laurent Orseau, Marcus Hutter, and Joel Veness. 2023. Language modeling is compression.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, et al. 2023. Textbooks are all you need. arXiv preprint arXiv:2306.11644.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding.
Danny Hernandez, Tom Brown, Tom Conerly, Nova DasSarma, Dawn Drain, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Tom Henighan, Tristan Hume, Scott Johnston, Ben Mann, Chris Olah, Catherine Olsson, Dario Amodei, Nicholas Joseph, Jared Kaplan, and Sam McCandlish. 2022. Scaling laws and interpretability of learning from repeated data.

Danny Hernandez, Jared Kaplan, Tom Henighan, and Sam McCandlish. 2021. Scaling laws for transfer.

Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and Laurent Sifre. 2022. Training compute-optimal large language models.
Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, Yao Fu, Maosong Sun, and Junxian He. 2023. C-Eval: A multi-level multi-discipline Chinese evaluation suite for foundation models. arXiv preprint arXiv:2305.08322.

InternLM Team. 2023. InternLM: A multilingual language model with progressively enhanced capabilities. https://github.com/InternLM/InternLM.

Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611, Vancouver, Canada. Association for Computational Linguistics.

Nikhil Kandpal, Eric Wallace, and Colin Raffel. 2022. Deduplicating training data mitigates privacy risks in language models.
2310.19341 | 58 | Nikhil Kandpal, Eric Wallace, and Colin Raffel. 2022. Deduplicating training data mitigates pri- vacy risks in language models.
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models.
Vijay Korthikanti, Jared Casper, Sangkug Lym, Lawrence McAfee, Michael Andersch, Mohammad Shoeybi, and Bryan Catanzaro. 2022. Reducing activation recomputation in large transformer models.
Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66–71, Brussels, Belgium. Association for Computational Linguistics.
Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683. | 2310.19341#58 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 59 | Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, and Nicholas Carlini. 2022. Deduplicating training data makes language models better.
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, and Timothy Baldwin. 2023a. CMMLU: Measuring massive multitask language understanding in Chinese.
Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, and Yin Tat Lee. 2023b. Textbooks are all you need II: phi-1.5 technical report. arXiv preprint arXiv:2309.05463.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. | 2310.19341#59 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 60 | Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space.
Niklas Muennighoff, Alexander M. Rush, Boaz Barak, Teven Le Scao, Aleksandra Piktus, Nouamane Tazi, Sampo Pyysalo, Thomas Wolf, and Colin Raffel. 2023. Scaling data-constrained language models.
Deepak Narayanan, Mohammad Shoeybi, Jared Casper, Patrick LeGresley, Mostofa Patwary, Vijay Anand Korthikanti, Dmitri Vainbrand, Prethvi Kashinkunti, Julie Bernauer, Bryan Catanzaro, Amar Phanishayee, and Matei Zaharia. 2021. Efficient large-scale language model training on GPU clusters using Megatron-LM.
OpenAI. 2023. GPT-4 technical report. | 2310.19341#60 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 61 | OpenAI. 2023. GPT-4 technical report.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback.
Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. 2023. The RefinedWeb dataset for Falcon LLM: outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116. | 2310.19341#61 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 62 | Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics.
Qwen Team. 2023. QWEN technical report. https://github.com/QwenLM/Qwen.
Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language understanding by generative pre-training.
Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. 2020. ZeRO: Memory optimizations toward training trillion parameter models. | 2310.19341#62 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 63 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, and Gabriel Synnaeve. 2023. Code Llama: Open foundation models for code.
Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2021. WinoGrande: An adversarial Winograd schema challenge at scale. Communications of the ACM, 64(9):99–106.
Noam Shazeer. 2020. GLU variants improve transformer.
Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. 2020. Megatron-LM: Training multi-billion parameter language models using model parallelism. | 2310.19341#63 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 64 | Daria Soboleva, Faisal Al-Khateeb, Robert Myers, Jacob R Steeves, Joel Hestness, and Nolan Dey. 2023. SlimPajama: A 627B token cleaned and deduplicated version of RedPajama.
Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, and Yunfeng Liu. 2022. RoFormer: Enhanced transformer with rotary position embedding.
Tianxiang Sun and Xipeng Qiu. 2023. MOSS. https://github.com/OpenLMLab/MOSS/blob/main/README_en.md.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford Alpaca: An instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca.
Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. 2022. Galactica: A large language model for science. | 2310.19341#64 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 65 | THUDM. 2023. ChatGLM3-6B. https://github.com/THUDM/ChatGLM3 Webpage in Chinese.
Together Computer. 2023. Redpajama: An open source recipe to reproduce llama training dataset.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023a. LLaMA: Open and efficient foundation language models.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. | 2310.19341#65 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 66 | Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, pages 6000–6010, Red Hook, NY, USA. Curran Associates Inc.
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2023. Self-Instruct: Aligning language models with self-generated instructions.
Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, and Edouard Grave. 2020. CCNet: Extracting high quality monolingual datasets from web crawl data. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 4003–4012, Marseille, France. European Language Resources Association. | 2310.19341#66 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 67 | Mengzhou Xia, Mikel Artetxe, Chunting Zhou, Xi Victoria Lin, Ramakanth Pasunuru, Danqi Chen, Luke Zettlemoyer, and Ves Stoyanov. 2023. Training trajectories of language models across scales.
Wenhan Xiong, Jingyu Liu, Igor Molybog, Hejia Zhang, Prajjwal Bhargava, Rui Hou, Louis Martin, Rashi Rungta, Karthik Abinav Sankararaman, Barlas Oguz, Madian Khabsa, Han Fang, Yashar Mehdad, Sharan Narang, Kshitiz Malik, Angela Fan, Shruti Bhosale, Sergey Edunov, Mike Lewis, Sinong Wang, and Hao Ma. 2023. Effective long-context scaling of foundation models.
Xverse-AI. 2023. Xverse-13B. https://github.com/xverse-ai/XVERSE-13B Webpage in Chinese. | 2310.19341#67 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 68 | Xverse-AI. 2023. Xverse-13B. https://github.com/xverse-ai/XVERSE-13B Webpage in Chinese.
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4791–4800, Florence, Italy. Association for Computational Linguistics.
Biao Zhang and Rico Sennrich. 2019. Root Mean Square Layer Normalization. In Advances in Neural Information Processing Systems 32, Vancouver, Canada.
# A Details on GPT-7B vs. LLaMA-7B Experiment
In a preliminary experiment, we compared the language modeling performance of the GPT and LLaMA architectures in a controlled environment. We trained a 7B model with the GPT architecture and a comparable 7B model with the LLaMA architecture for 200B tokens sampled from the same corpus and with the same training parameters. Details are given in Table 9.
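To make the architectural contrast concrete, here is a minimal PyTorch sketch of the two feed-forward variants from Table 9 (GELU with FFN size 16384 versus SwiGLU with FFN size 11008, both at hidden size 4096). It is an illustration only, not the training code used in the experiment.

```python
# Minimal sketch, assuming standard PyTorch; dimensions follow Table 9.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeluFFN(nn.Module):
    """GPT-7B style feed-forward block: Linear -> GELU -> Linear."""
    def __init__(self, d_model: int = 4096, d_ff: int = 16384):
        super().__init__()
        self.up = nn.Linear(d_model, d_ff)
        self.down = nn.Linear(d_ff, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down(F.gelu(self.up(x)))

class SwiGLUFFN(nn.Module):
    """LLaMA-7B style feed-forward block: gated SiLU (SwiGLU)."""
    def __init__(self, d_model: int = 4096, d_ff: int = 11008):
        super().__init__()
        self.gate = nn.Linear(d_model, d_ff, bias=False)
        self.up = nn.Linear(d_model, d_ff, bias=False)
        self.down = nn.Linear(d_ff, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down(F.silu(self.gate(x)) * self.up(x))
```

The smaller 11008 FFN width roughly offsets SwiGLU's third projection, so the two blocks have comparable parameter counts (about 134M each).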
# B Preliminary Experiments on Distributed Training | 2310.19341#68 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 69 | # B Preliminary Experiments on Distributed Training
In Table 10 we report preliminary results obtained with various distributed training configurations for the LLaMA2-13B and Skywork-13B model architectures. In both cases, the best throughput is achieved with DP256 and PP2 under the ZeRO-1 setting.
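As a rough sanity check on the MFU column, the sketch below relates tokens per GPU per second to MFU using the common 6N + 12·L·h·s per-token training FLOPs estimate. The 13B shape (40 layers, hidden size 5120) and the 312 TFLOPS bf16 peak of an A100/A800-class GPU are our assumptions; neither is stated in this appendix.

```python
# Hedged sketch: relate tokens/GPU/second to Model FLOPs Utilization (MFU).
def flops_per_token(n_params: float, n_layers: int, hidden: int, seq_len: int) -> float:
    # 6*N covers the weight matmuls; 12*L*h*s approximates attention.
    return 6.0 * n_params + 12.0 * n_layers * hidden * seq_len

def mfu(tokens_per_gpu_per_s: float, fpt: float, peak_flops: float = 312e12) -> float:
    return tokens_per_gpu_per_s * fpt / peak_flops

fpt = flops_per_token(13e9, n_layers=40, hidden=5120, seq_len=4096)
print(f"{mfu(2045, fpt):.1%}")  # ~57.7%, close to the 58.5% reported for DP256+PP2
```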
# C More Benchmark Results
We also provide results for the following benchmarks in Table 11:
⢠TriviaQA (Joshi et al., 2017): TriviaQA is a realistic text-based question answer- ing dataset which includes 950K question- answer pairs from 662K documents collected from Wikipedia and the web.
⢠HellaSwag (Zellers et al., 2019): HellaSWAG is a dataset that focuses on grounded com- monsense inference.
⢠Winogrande (Sakaguchi et al., 2021): Wino- Grande is a dataset that focuses on com- monsense reasoning.
⢠BoolQ (Clark et al., 2019) BoolQ is a ques- tion answering dataset for yes/no questions.
⢠PIQA (Bisk et al., 2019): PIQA is a dataset for commonsense reasoning, and was cre- ated to investigate the physical knowledge of existing models in NLP.
• ARC (Clark et al., 2018): ARC is a dataset consisting of multiple-choice question-answering tasks that focus on commonsense reasoning. | 2310.19341#69 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 70 | • ARC (Clark et al., 2018): ARC is a dataset consisting of multiple-choice question-answering tasks that focus on commonsense reasoning.
⢠RACE (Lai et al., 2017) RACE is a dataset that focuses on reading comprehension.
# D Details on LM Test Sets
We established a daily crawl of published articles and user posts from a selection of widely used Chinese websites. This data collection process is distinct from the pipeline utilized to construct SkyPile. The purpose of gathering this data is to create independent language modeling test sets, categorized by domain, for the evaluation of current open Large Language Models (LLMs).
Below we describe the sources of these domain test sets:
⢠Technology: AI related articles from (36kr. com). This website provides timely and comprehensive news articles about startups, technology, and business trends, primarily in the Chinese market.
⢠Movie: User written movie reviews from Douban (douban.com). Douban is a popular social networking service in China that offers a platform for users to share their opinions and create content related to movies, books, and music. It is one of the most influential web 2.0 websites in China and has a strong focus on user-generated content.
⢠Government: News from website of Peo- pleâs Daily (www.people.com.cn), which is the | 2310.19341#70 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 71 | • Government: News from the website of People's Daily (www.people.com.cn), which is the
| Parameter | GPT-7B | LLaMA-7B |
|---|---|---|
| Positional Embedding | Absolute | Rotary |
| Max Position Embeddings | 4096 | 4096 |
| Normalization | LayerNorm | RMSNorm |
| Activation | GELU | SwiGLU |
| Attention | MHA | MHA |
| Num. Layers | 32 | 32 |
| Hidden Size | 4096 | 4096 |
| Num. Heads | 32 | 32 |
| FFN Size | 16384 | 11008 |
| Context Size | 4096 | 4096 |
| Global Batch Size | 1024 | 1024 |
| Adam β1 | 0.9 | 0.9 |
| Adam β2 | 0.95 | 0.95 |
| Adam ϵ | 1.00e-8 | 1.00e-8 |
| Precision | bf16 | bf16 |
| Peak Learning Rate | 3e-4 | 3e-4 |
| Min Learning Rate | 3e-5 | 3e-5 |
| Learning Rate Decay Steps | 43945 | 43945 |
| Learning Rate Decay Style | Cosine | Cosine |
| Warm-up Steps | 2000 | 2000 |
| Weight Decay | 0.1 | 0.1 |
| Dropout Probability | 0.1 | 0 |
| Gradient Clip | 1 | 1 |
| Total Steps | 51200 | 51200 |
Table 9: Comparison of GPT-7B and LLaMA-7B. All variables are controlled in our experiment except for the differences in architecture. | 2310.19341#71 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 72 | Table 9: Comparison of GPT-7B and LLaMA-7B. All variables are controlled in our experiment except for the differences in architecture.
| Model | Strategy | Throughput | MFU | TFlops | Memory |
|---|---|---|---|---|---|
| LLaMA2 | DP512 | - | - | - | OOM |
| LLaMA2 | DP256+PP2 | 2045 | 58.5 | 182.6 | 70.7 |
| LLaMA2 | DP256+TP2 | 1928 | 55.2 | 172.2 | 65.5 |
| LLaMA2 | DP128+TP2+PP2 | 1936 | 55.4 | 172.9 | 39.4 |
| LLaMA2 | DP128+PP4 | 1964 | 56.2 | 175.4 | 53.4 |
| LLaMA2 | DP128+TP4 | 1744 | 44.4 | 138.5 | 35.4 |
| Skywork | DP512 | - | - | - | OOM |
| Skywork | DP256+PP2 | 1873 | 56.5 | 176.2 | 77.1 |
| Skywork | DP256+TP2 | 1775 | 53.5 | 167.0 | 67.9 |
| Skywork | DP128+TP2+PP2 | 1776 | 53.5 | 167.0 | 42.5 |
| Skywork | DP128+PP4 | 1828 | 55.1 | 171.9 | 58.7 |
| Skywork | DP128+TP4 | 1417 | 43.1 | 134.6 | 36.6 |
| 2310.19341#72 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 73 | Table 10: Compute efficiency achieved with different distributed training configurations. We tested both LLaMA2-13B and Skywork-13B. Throughout the experiments, we use a global batch size of 4096 and a micro batch size of 1. When Tensor Parallelism is enabled, Sequence Parallelism is enabled as well. Throughput is measured in tokens processed per GPU per second, while Model Flops Utilization (MFU) is expressed as a percentage (%). Memory usage is reported in Gigabytes (GB). | 2310.19341#73 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 74 |
| Models | BoolQ | PIQA | Winogrande | TriviaQA | RACE | HellaSwag | ARC-E | ARC-C |
|---|---|---|---|---|---|---|---|---|
| OpenLLaMA-13B | 77.6 | 79.5 | 72.0 | 60.2 | 42.4 | 76.0 | 78.9 | 48.6 |
| LLaMA-13B | 80.7 | 81.0 | 76.2 | 65.0 | 43.4 | 80.1 | 82.1 | 54.7 |
| LLaMA2-13B | 83.3 | 81.7 | 75.8 | 68.2 | 43.9 | 81.5 | 83.7 | 57.0 |
| Baichuan-13B | 78.8 | 77.2 | 70.4 | 51.6 | 35.8 | 74.2 | 77.2 | 48.4 |
| Baichuan2-13B | 80.3 | 79.3 | 72.1 | 58.0 | 25.2 | 76.4 | 81.1 | 53.2 |
| Xverse-13B | 79.8 | 80.0 | 71.1 | 53.3 | 43.2 | 77.2 | 78.5 | 49.1 |
| Skywork-13B | 82.9 | 79.9 | 72.2 | 54.0 | 45.2 | 77.4 | 78.5 | 50.2 |
Table 11: More English benchmark results. As all of these models are more or less sensitive to the prompt template or number of shots, the reported results, which are reproduced by us, may differ from those from other sources. | 2310.19341#74 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 75 | most influential and authoritative newspapers in China. The language used in the news is typically formal Standard Mandarin and carries an authoritative tone.
⢠Game: Articles from Gcores (www.gcores. com). This is a Chinese digital media plat- form dedicated to video games, tech trends, and geek culture. The platform features a wide range of original content, including news articles, podcast episodes, videos, and independent games.
⢠Finance: News from finance section of Sina It is one of Chinaâs (finance.sina.com.cn). leading online media companies, offers a comprehensive suite of financial information and services. It covers a broad range of topics including stock markets, forex, com- modities, real estate, and personal finance.
⢠General: News from Jiemian News (www. jiemian.com). Jiemian is a prominent Chi- nese digital media platform known for its in-depth and high-quality journalism. It cov- ers a wide range of topics, including politics, economy, culture, technology, finance, and lifestyle.
Subject Stage-1 Stage-2 Boost | 2310.19341#75 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 76 |
Subject Stage-1 Stage-2 Boost
Accountant, Advanced Mathematics, Art Studies, Basic Medicine, Business Administration, Chinese Language and Literature, Civil Servant, Clinical Medicine, College Chemistry, College Economics, College Physics, College Programming, Computer Architecture, Computer Network, Discrete Mathematics, Education Science, Electrical Engineer, Environmental Impact Assessment Engineer, Fire Engineer, High School Biology, High School Chemistry, High School Chinese, High School Geography, High School History, High School Mathematics, High School Physics, High School Politics, Ideological and Moral Cultivation, Law, Legal Professional, Logic, Mao Zedong Thought, Marxism, Metrology Engineer, Middle School Biology, Middle School Chemistry, Middle School Geography, Middle School History, Middle School Mathematics, Middle School Physics, Middle School Politics, Modern Chinese History, Operating System, Physician, Plant Protection, Probability and Statistics, Professional Tour Guide, Sports Science, Tax Accountant, Teacher Qualification, Urban and Rural Planner, Veterinary Medicine | 2310.19341#76 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.19341 | 79 | 8.2 15.8 12.1 15.8 6.1 8.7 25.5 4.5 12.5 -5.5 21.1 0.0 19.0 5.3 -31.3 31.0 0.0 6.5 6.5 36.8 26.3 15.8 42.1 0.0 -11.1 15.8 36.8 15.8 12.5 13.0 -4.5 12.5 5.3 20.8 19.0 65.0 41.7 22.7 21.1 31.6 38.1 26.1 -5.3 10.2 0.0 5.6 -3.4 10.5 18.4 22.7 17.4 34.8
Table 12: Details on CEVAL benchmark results.
[Figure: accuracy on BoolQ, Winogrande, and RACE over the course of training, plotted against tokens (B).] | 2310.19341#77 | Skywork: A More Open Bilingual Foundation Model | In this technical report, we present Skywork-13B, a family of large language
models (LLMs) trained on a corpus of over 3.2 trillion tokens drawn from both
English and Chinese texts. This bilingual foundation model is the most
extensively trained and openly published LLMs of comparable size to date. We
introduce a two-stage training methodology using a segmented corpus, targeting
general purpose training and then domain-specific enhancement training,
respectively. We show that our model not only excels on popular benchmarks, but
also achieves \emph{state of the art} performance in Chinese language modeling
on diverse domains. Furthermore, we propose a novel leakage detection method,
demonstrating that test data contamination is a pressing issue warranting
further investigation by the LLM community. To spur future research, we release
Skywork-13B along with checkpoints obtained during intermediate stages of the
training process. We are also releasing part of our SkyPile corpus, a
collection of over 150 billion tokens of web text, which is the largest high
quality open Chinese pre-training corpus to date. We hope Skywork-13B and our
open corpus will serve as a valuable open-source resource to democratize access
to high-quality LLMs. | http://arxiv.org/pdf/2310.19341 | Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, Chenxia Li, Liu Yang, Xilin Luo, Xuejie Wu, Lunan Liu, Wenjun Cheng, Peng Cheng, Jianhao Zhang, Xiaoyu Zhang, Lei Lin, Xiaokun Wang, Yutuan Ma, Chuanhai Dong, Yanqi Sun, Yifu Chen, Yongyi Peng, Xiaojuan Liang, Shuicheng Yan, Han Fang, Yahui Zhou | cs.CL, cs.AI | null | null | cs.CL | 20231030 | 20231030 | [
{
"id": "2309.05463"
},
{
"id": "1803.05457"
},
{
"id": "2204.02311"
},
{
"id": "2305.10403"
},
{
"id": "2306.11644"
},
{
"id": "2305.08322"
},
{
"id": "2307.09288"
},
{
"id": "1704.04683"
},
{
"id": "2306.01116"
}
] |
2310.18018 | 0 |
NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
Oscar Sainz1, Jon Ander Campos2, Iker García-Ferrero1, Julen Etxaniz1, Oier Lopez de Lacalle1, Eneko Agirre1
1 HiTZ Center - Ixa, University of the Basque Country UPV/EHU
{oscar.sainz,iker.garciaf,julen.etxaniz}@ehu.eus
{oier.lopezdelacalle,e.agirre}@ehu.eus
2 Cohere
[email protected]
# Abstract | 2310.18018#0 | NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark | In this position paper, we argue that the classical evaluation on Natural
Language Processing (NLP) tasks using annotated benchmarks is in trouble. The
worst kind of data contamination happens when a Large Language Model (LLM) is
trained on the test split of a benchmark, and then evaluated in the same
benchmark. The extent of the problem is unknown, as it is not straightforward
to measure. Contamination causes an overestimation of the performance of a
contaminated model in a target benchmark and associated task with respect to
their non-contaminated counterparts. The consequences can be very harmful, with
wrong scientific conclusions being published while other correct ones are
discarded. This position paper defines different levels of data contamination
and argues for a community effort, including the development of automatic and
semi-automatic measures to detect when data from a benchmark was exposed to a
model, and suggestions for flagging papers with conclusions that are
compromised by data contamination. | http://arxiv.org/pdf/2310.18018 | Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, Eneko Agirre | cs.CL | Accepted at EMNLP2024-Findings | null | cs.CL | 20231027 | 20231027 | [
{
"id": "2103.03874"
},
{
"id": "2212.09689"
},
{
"id": "2110.14168"
},
{
"id": "2212.10560"
}
] |
2310.18018 | 1 | # Abstract
In this position paper, we argue that the classical evaluation on Natural Language Processing (NLP) tasks using annotated benchmarks is in trouble. The worst kind of data contamination happens when a Large Language Model (LLM) is trained on the test split of a benchmark, and then evaluated in the same benchmark. The extent of the problem is unknown, as it is not straightforward to measure. Contamination causes an overestimation of the performance of a contaminated model in a target benchmark and associated task with respect to their non-contaminated counterparts. The consequences can be very harmful, with wrong scientific conclusions being published while other correct ones are discarded. This position paper defines different levels of data contamination and argues for a community effort, including the development of automatic and semi-automatic measures to detect when data from a benchmark was exposed to a model, and suggestions for flagging papers with conclusions that are compromised by data contamination.
et al., 2020) the need for data has been solved by crawling the internet, reaching trillions of tokens (Touvron et al., 2023a), and making it very hard to know whether a specific benchmark was used to train the LLM. This is applicable to all models, even if they document the source of the data at a high level, but especially for closed models with no or insufficient documentation. | 2310.18018#1 | NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark | In this position paper, we argue that the classical evaluation on Natural
Language Processing (NLP) tasks using annotated benchmarks is in trouble. The
worst kind of data contamination happens when a Large Language Model (LLM) is
trained on the test split of a benchmark, and then evaluated in the same
benchmark. The extent of the problem is unknown, as it is not straightforward
to measure. Contamination causes an overestimation of the performance of a
contaminated model in a target benchmark and associated task with respect to
their non-contaminated counterparts. The consequences can be very harmful, with
wrong scientific conclusions being published while other correct ones are
discarded. This position paper defines different levels of data contamination
and argues for a community effort, including the development of automatic and
semi-automatic measures to detect when data from a benchmark was exposed to a
model, and suggestions for flagging papers with conclusions that are
compromised by data contamination. | http://arxiv.org/pdf/2310.18018 | Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, Eneko Agirre | cs.CL | Accepted at EMNLP2024-Findings | null | cs.CL | 20231027 | 20231027 | [
{
"id": "2103.03874"
},
{
"id": "2212.09689"
},
{
"id": "2110.14168"
},
{
"id": "2212.10560"
}
] |
2310.18018 | 2 | Data contamination has two consequences. The first one is that the performance of an LLM when evaluated on a benchmark it already processed during pre-training will be overestimated, causing it to be preferred with respect to other LLMs. This affects the comparative assessment of the quality of LLMs. The second is that papers proposing scientific hypotheses on certain NLP tasks could be using contaminated LLMs, and thus make wrong claims about their hypotheses, and invalidate alternative hypotheses that could be true. This second consequence has an enormous negative impact on our field and is our main focus.
# Introduction | 2310.18018#2 | NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark | In this position paper, we argue that the classical evaluation on Natural
Language Processing (NLP) tasks using annotated benchmarks is in trouble. The
worst kind of data contamination happens when a Large Language Model (LLM) is
trained on the test split of a benchmark, and then evaluated in the same
benchmark. The extent of the problem is unknown, as it is not straightforward
to measure. Contamination causes an overestimation of the performance of a
contaminated model in a target benchmark and associated task with respect to
their non-contaminated counterparts. The consequences can be very harmful, with
wrong scientific conclusions being published while other correct ones are
discarded. This position paper defines different levels of data contamination
and argues for a community effort, including the development of automatic and
semi-automatic measures to detect when data from a benchmark was exposed to a
model, and suggestions for flagging papers with conclusions that are
compromised by data contamination. | http://arxiv.org/pdf/2310.18018 | Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, Eneko Agirre | cs.CL | Accepted at EMNLP2024-Findings | null | cs.CL | 20231027 | 20231027 | [
{
"id": "2103.03874"
},
{
"id": "2212.09689"
},
{
"id": "2110.14168"
},
{
"id": "2212.10560"
}
] |
2310.18018 | 3 |
# Introduction
At the core of NLP as a discipline, there is rigorous evaluation on different tasks. The experimental protocols involve strict control over the data, especially test data, which needs to be totally unseen during development, but also over training and development data. This is essential to assess the performance of a model in zero-shot, few-shot, or fully supervised settings. Since fine-tuning and prompting of Large Language Models (LLMs) became commonplace (Min et al., 2021) it has been increasingly difficult to enforce those strict protocols. Pre-training LLMs is expensive, and therefore, most of the time, researchers use LLMs trained by third-party entities (Raffel et al., 2020; Touvron et al., 2023a), which are agnostic to the target tasks where those LLMs are going to be used. With the growing scale of LLMs (Kaplan et al., 2020; Henighan | 2310.18018#3 | NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark | In this position paper, we argue that the classical evaluation on Natural
Language Processing (NLP) tasks using annotated benchmarks is in trouble. The
worst kind of data contamination happens when a Large Language Model (LLM) is
trained on the test split of a benchmark, and then evaluated in the same
benchmark. The extent of the problem is unknown, as it is not straightforward
to measure. Contamination causes an overestimation of the performance of a
contaminated model in a target benchmark and associated task with respect to
their non-contaminated counterparts. The consequences can be very harmful, with
wrong scientific conclusions being published while other correct ones are
discarded. This position paper defines different levels of data contamination
and argues for a community effort, including the development of automatic and
semi-automatic measures to detect when data from a benchmark was exposed to a
model, and suggestions for flagging papers with conclusions that are
compromised by data contamination. | http://arxiv.org/pdf/2310.18018 | Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, Eneko Agirre | cs.CL | Accepted at EMNLP2024-Findings | null | cs.CL | 20231027 | 20231027 | [
{
"id": "2103.03874"
},
{
"id": "2212.09689"
},
{
"id": "2110.14168"
},
{
"id": "2212.10560"
}
] |
2310.18018 | 4 | There are several measures that the community could take. A possible solution would be to avoid all research involving datasets which include published test data, and focus on datasets where the test data labels are not public. This solution will severely affect the number of NLP tasks for which benchmarks exist, at least until new benchmarks that avoid data leakage are produced. Jacovi et al. (2023) present preventative strategies to avoid contamination in the future.
In this position paper, we propose a complementary line of action which seeks to measure and document data contamination cases, specifying LLM, benchmark and evidence supporting contamination. This solution involves a registry of contamination cases1, collaborative manual work and research on automatic approaches. In addition, conferences should devise mechanisms to ensure that papers
1Such as the LM Contamination Index https://hitz-zentroa.github.io/lm-contamination/
don't include conclusions involving contamination, and to flag past work where contamination has been discovered after publication.
The paper starts by introducing background, followed by a definition of data contamination, contamination at different steps, methods to measure data contamination and a call for action.
# 2 Background | 2310.18018#4 | NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark | In this position paper, we argue that the classical evaluation on Natural
Language Processing (NLP) tasks using annotated benchmarks is in trouble. The
worst kind of data contamination happens when a Large Language Model (LLM) is
trained on the test split of a benchmark, and then evaluated in the same
benchmark. The extent of the problem is unknown, as it is not straightforward
to measure. Contamination causes an overestimation of the performance of a
contaminated model in a target benchmark and associated task with respect to
their non-contaminated counterparts. The consequences can be very harmful, with
wrong scientific conclusions being published while other correct ones are
discarded. This position paper defines different levels of data contamination
and argues for a community effort, including the development of automatic and
semi-automatic measures to detect when data from a benchmark was exposed to a
model, and suggestions for flagging papers with conclusions that are
compromised by data contamination. | http://arxiv.org/pdf/2310.18018 | Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, Eneko Agirre | cs.CL | Accepted at EMNLP2024-Findings | null | cs.CL | 20231027 | 20231027 | [
{
"id": "2103.03874"
},
{
"id": "2212.09689"
},
{
"id": "2110.14168"
},
{
"id": "2212.10560"
}
] |
2310.18018 | 5 | The paper starts by introducing background, followed by a definition of data contamination, contamination at different steps, methods to measure data contamination and a call for action.
# 2 Background
Detection of contamination cases has been traditionally done by directly analyzing the training data (Dodge et al., 2021), but the current scale of the pre-training data makes it difficult (Kreutzer et al., 2022; Birhane et al., 2021). Without proper documentation and search tools like ROOTS (Piktus et al., 2023) it is very difficult for any researcher to actually know whether their datasets are compromised in a given model. More recently, this task became even harder, as the best-performing LLMs are deployed as products, and therefore, their training corpora are kept secret. In this case, it has been shown that the high memorization abilities of LLMs can be used to generate portions of the training texts (Carlini et al., 2021; Magar and Schwartz, 2022). Using this memorization property, Sainz et al. (2023) show that ChatGPT generates portions of popular NLP benchmarks. Furthermore, LLMs' memorization has been studied in data-leakage scenarios (Elangovan et al., 2021). | 2310.18018#5 | NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark | In this position paper, we argue that the classical evaluation on Natural
Language Processing (NLP) tasks using annotated benchmarks is in trouble. The
worst kind of data contamination happens when a Large Language Model (LLM) is
trained on the test split of a benchmark, and then evaluated in the same
benchmark. The extent of the problem is unknown, as it is not straightforward
to measure. Contamination causes an overestimation of the performance of a
contaminated model in a target benchmark and associated task with respect to
their non-contaminated counterparts. The consequences can be very harmful, with
wrong scientific conclusions being published while other correct ones are
discarded. This position paper defines different levels of data contamination
and argues for a community effort, including the development of automatic and
semi-automatic measures to detect when data from a benchmark was exposed to a
model, and suggestions for flagging papers with conclusions that are
compromised by data contamination. | http://arxiv.org/pdf/2310.18018 | Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, Eneko Agirre | cs.CL | Accepted at EMNLP2024-Findings | null | cs.CL | 20231027 | 20231027 | [
{
"id": "2103.03874"
},
{
"id": "2212.09689"
},
{
"id": "2110.14168"
},
{
"id": "2212.10560"
}
] |
2310.18018 | 6 | Regarding data contamination cases, Dodge et al. (2021) exposed that the C4 corpus (Raffel et al., 2020), a corpus used to pre-train several LLMs such as T5 (Raffel et al., 2020), contained the test splits of several benchmarks that were crawled from GitHub. Moreover, Brown et al. (2020) acknowledged a bug in their filtering script that caused the contamination of several benchmarks during the GPT-3 training. Furthermore, OpenAI (2023) stated that parts of the BIG-bench (Srivastava et al., 2023) benchmark were inadvertently mixed into the training set, enough to stop them from evaluating the model on it. They also mention that they included parts of the training sets of MATH (Hendrycks et al., 2021) and GSM-8K (Cobbe et al., 2021) as training data to improve mathematical reasoning (OpenAI, 2023). Therefore, the performance results reported for GSM-8K cannot be taken as zero-shot results when compared to other models. | 2310.18018#6 | NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark | In this position paper, we argue that the classical evaluation on Natural
Language Processing (NLP) tasks using annotated benchmarks is in trouble. The
worst kind of data contamination happens when a Large Language Model (LLM) is
trained on the test split of a benchmark, and then evaluated in the same
benchmark. The extent of the problem is unknown, as it is not straightforward
to measure. Contamination causes an overestimation of the performance of a
contaminated model in a target benchmark and associated task with respect to
their non-contaminated counterparts. The consequences can be very harmful, with
wrong scientific conclusions being published while other correct ones are
discarded. This position paper defines different levels of data contamination
and argues for a community effort, including the development of automatic and
semi-automatic measures to detect when data from a benchmark was exposed to a
model, and suggestions for flagging papers with conclusions that are
compromised by data contamination. | http://arxiv.org/pdf/2310.18018 | Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, Eneko Agirre | cs.CL | Accepted at EMNLP2024-Findings | null | cs.CL | 20231027 | 20231027 | [
{
"id": "2103.03874"
},
{
"id": "2212.09689"
},
{
"id": "2110.14168"
},
{
"id": "2212.10560"
}
] |
2310.18018 | 7 | Recently, Sainz et al. (2023) reported that several benchmarks have already been compromised in ChatGPT, including the popular CoNLL2003 (Tjong Kim Sang and De Meulder, 2003). There are several preprints that evaluate ChatGPT on CoNLL03 (Wei et al., 2023; Li et al., 2023a; Han et al., 2023) and at least one conference paper published at ACL 2023 that evaluates GPT-3 (Brown et al., 2020) and Codex (Chen et al., 2021) on the same benchmark (Li et al., 2023b). Appendix A shows evidence for data contamination for those LLMs, and casts doubts on the conclusions of those papers.
# 3 Defining data contamination
In general, data contamination refers to any breach in the strict control of datasets required by the experimental protocol. In this paper, we focus on the specific case where an LLM has processed the evaluation benchmark during its pre-training. However, different types of contamination exist and each of them has different implications. In this section, we present three types of contamination: guideline, text and annotation. | 2310.18018#7 | NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark | In this position paper, we argue that the classical evaluation on Natural
Language Processing (NLP) tasks using annotated benchmarks is in trouble. The
worst kind of data contamination happens when a Large Language Model (LLM) is
trained on the test split of a benchmark, and then evaluated in the same
benchmark. The extent of the problem is unknown, as it is not straightforward
to measure. Contamination causes an overestimation of the performance of a
contaminated model in a target benchmark and associated task with respect to
their non-contaminated counterparts. The consequences can be very harmful, with
wrong scientific conclusions being published while other correct ones are
discarded. This position paper defines different levels of data contamination
and argues for a community effort, including the development of automatic and
semi-automatic measures to detect when data from a benchmark was exposed to a
model, and suggestions for flagging papers with conclusions that are
compromised by data contamination. | http://arxiv.org/pdf/2310.18018 | Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, Eneko Agirre | cs.CL | Accepted at EMNLP2024-Findings | null | cs.CL | 20231027 | 20231027 | [
{
"id": "2103.03874"
},
{
"id": "2212.09689"
},
{
"id": "2110.14168"
},
{
"id": "2212.10560"
}
] |
2310.18018 | 8 | Guideline contamination happens when the annotation guidelines for a specific dataset are seen by the model. Usually, for specialized annotations, highly detailed guidelines are required. The guidelines can usually be publicly found on the internet, even for datasets that are not public or require buying a license for their use, ACE05 (Walker et al., 2006) for example. The more details the guidelines have, the more information and examples they provide. A model aware of the guidelines for a specific task or dataset has advantages over a model without such information. We should consider guideline contamination especially in zero- and few-shot evaluations.
Raw text contamination happens when the original text (prior to annotation) is seen by the model. Some examples of this type of contamination are the datasets based on Wikipedia texts. Wikipedia is commonly used as a source of pre-training data, but it is also a frequent source of text to create new datasets. MultiCoNER 2 (Fetahu et al., 2023), a Named Entity Recognition dataset based on Wikipedia links and Wikidata information, is an example of this phenomenon. Models that have already seen Wikipedia in its original form (including the markup annotations) have more information to better identify a part of the annotations (the entity boundaries) of the dataset. As | 2310.18018#8 | NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark | In this position paper, we argue that the classical evaluation on Natural
Language Processing (NLP) tasks using annotated benchmarks is in trouble. The
worst kind of data contamination happens when a Large Language Model (LLM) is
trained on the test split of a benchmark, and then evaluated in the same
benchmark. The extent of the problem is unknown, as it is not straightforward
to measure. Contamination causes an overestimation of the performance of a
contaminated model in a target benchmark and associated task with respect to
their non-contaminated counterparts. The consequences can be very harmful, with
wrong scientific conclusions being published while other correct ones are
discarded. This position paper defines different levels of data contamination
and argues for a community effort, including the development of automatic and
semi-automatic measures to detect when data from a benchmark was exposed to a
model, and suggestions for flagging papers with conclusions that are
compromised by data contamination. | http://arxiv.org/pdf/2310.18018 | Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, Eneko Agirre | cs.CL | Accepted at EMNLP2024-Findings | null | cs.CL | 20231027 | 20231027 | [
{
"id": "2103.03874"
},
{
"id": "2212.09689"
},
{
"id": "2110.14168"
},
{
"id": "2212.10560"
}
] |
2310.18018 | 9 | pointed out by Dodge et al. (2021), other datasets built from the web such as IMDB (Maas et al., 2011) and CNN/DailyMail (Hermann et al., 2015) can also be compromised. This kind of contamination should be taken into account when developing automatically annotated datasets.
Annotation contamination happens when the annotations (labels) of the target benchmark are exposed to the model during training. Depending on the splits of the benchmark that have been exposed, we can have the following cases: (1) When the evaluation split is involved, the experiment is completely invalidated. This is the most harmful level of contamination. (2) When the train or development splits are involved, this would not affect comparisons with other models that have been developed using those same splits, but it does invalidate conclusions claiming zero-shot or few-shot performance.
# 4 Contamination on different steps
Currently, the standard procedure to train and deploy language models has three main steps: pre-training a language model; fine-tuning the model to follow instructions and/or align with human feedback; and an iterative improvement step after deployment. Data contamination does not only occur in the pre-training step of LLMs, but can occur later in the training pipeline.
# 4.1 Contamination during pre-training | 2310.18018#9 | NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark | In this position paper, we argue that the classical evaluation on Natural
Language Processing (NLP) tasks using annotated benchmarks is in trouble. The
worst kind of data contamination happens when a Large Language Model (LLM) is
trained on the test split of a benchmark, and then evaluated in the same
benchmark. The extent of the problem is unknown, as it is not straightforward
to measure. Contamination causes an overestimation of the performance of a
contaminated model in a target benchmark and associated task with respect to
their non-contaminated counterparts. The consequences can be very harmful, with
wrong scientific conclusions being published while other correct ones are
discarded. This position paper defines different levels of data contamination
and argues for a community effort, including the development of automatic and
semi-automatic measures to detect when data from a benchmark was exposed to a
model, and suggestions for flagging papers with conclusions that are
compromised by data contamination. | http://arxiv.org/pdf/2310.18018 | Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, Eneko Agirre | cs.CL | Accepted at EMNLP2024-Findings | null | cs.CL | 20231027 | 20231027 | [
{
"id": "2103.03874"
},
{
"id": "2212.09689"
},
{
"id": "2110.14168"
},
{
"id": "2212.10560"
}
] |
2310.18018 | 10 | # 4.1 Contamination during pre-training
During pre-training, there is a high chance that undesired data is fed to the model. Gathering huge amounts of text from the internet also has its counterpart: it becomes very hard to filter undesired data completely, and even deduplication is challenging (Lee et al., 2022). Avoiding data contamination completely is not realistic, as it is impossible to know every dataset that the research community can test an LLM on. However, allowing researchers to access and perform queries on the pre-training data may ensure that no corrupted evaluations are performed. In fact, keeping the pre-training data unavailable to LLM consumers may lead to undesired influences on downstream tasks (Li et al., 2020; Gehman et al., 2020; Groenwold et al., 2020).
In addition, researchers building LLMs should avoid, at least, contamination from well-known standard benchmarks such as GLUE (Wang et al., 2018) or SuperGLUE (Wang et al., 2020). As Dodge et al. (2021) showed (see their Table 2), various standard benchmarks were found in the C4 (Raffel et al., 2020) corpus.
# 4.2 Contamination on supervised fine-tuning | 2310.18018#10 | NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark | In this position paper, we argue that the classical evaluation on Natural
Language Processing (NLP) tasks using annotated benchmarks is in trouble. The
worst kind of data contamination happens when a Large Language Model (LLM) is
trained on the test split of a benchmark, and then evaluated in the same
benchmark. The extent of the problem is unknown, as it is not straightforward
to measure. Contamination causes an overestimation of the performance of a
contaminated model in a target benchmark and associated task with respect to
their non-contaminated counterparts. The consequences can be very harmful, with
wrong scientific conclusions being published while other correct ones are
discarded. This position paper defines different levels of data contamination
and argues for a community effort, including the development of automatic and
semi-automatic measures to detect when data from a benchmark was exposed to a
model, and suggestions for flagging papers with conclusions that are
compromised by data contamination. | http://arxiv.org/pdf/2310.18018 | Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, Eneko Agirre | cs.CL | Accepted at EMNLP2024-Findings | null | cs.CL | 20231027 | 20231027 | [
{
"id": "2103.03874"
},
{
"id": "2212.09689"
},
{
"id": "2110.14168"
},
{
"id": "2212.10560"
}
] |
2310.18018 | 11 | # 4.2 Contamination on supervised fine-tuning
The supervised fine-tuning or instruction-tuning step is another step where contamination can occur. Nevertheless, it is much less frequent, as it is a required practice in the research community to document the training data in order to publish findings. Examples include the FLAN dataset collection (Longpre et al., 2023), OPT-IML Bench (Iyer et al., 2023), Super-NaturalInstructions (Wang et al., 2022b), the P3 collection (Bach et al., 2022) and so on. | 2310.18018#11 | NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark | In this position paper, we argue that the classical evaluation on Natural
Language Processing (NLP) tasks using annotated benchmarks is in trouble. The
worst kind of data contamination happens when a Large Language Model (LLM) is
trained on the test split of a benchmark, and then evaluated in the same
benchmark. The extent of the problem is unknown, as it is not straightforward
to measure. Contamination causes an overestimation of the performance of a
contaminated model in a target benchmark and associated task with respect to
their non-contaminated counterparts. The consequences can be very harmful, with
wrong scientific conclusions being published while other correct ones are
discarded. This position paper defines different levels of data contamination
and argues for a community effort, including the development of automatic and
semi-automatic measures to detect when data from a benchmark was exposed to a
model, and suggestions for flagging papers with conclusions that are
compromised by data contamination. | http://arxiv.org/pdf/2310.18018 | Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, Eneko Agirre | cs.CL | Accepted at EMNLP2024-Findings | null | cs.CL | 20231027 | 20231027 | [
{
"id": "2103.03874"
},
{
"id": "2212.09689"
},
{
"id": "2110.14168"
},
{
"id": "2212.10560"
}
] |
2310.18018 | 12 | Recently, more and more machine-generated text is being used to fine-tune language models. Some examples of these are Self-Instruct (Wang et al., 2022a), Unnatural Instructions (Honovich et al., 2022), Alpaca Data (Taori et al., 2023) and ShareGPT (Chiang et al., 2023). The aim of those datasets is usually to make public and smaller white-box models imitate black-box models such as ChatGPT (Gu et al., 2023). However, the distillation of a closed teacher model with clear signs of contamination is an issue. More alarming, workers on popular crowd-sourcing platforms like MTurk have started using LLMs to generate data that was supposed to be manually generated (Veselovsky et al., 2023).
# 4.3 Contamination after deployment | 2310.18018#12 | NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark | In this position paper, we argue that the classical evaluation on Natural
Language Processing (NLP) tasks using annotated benchmarks is in trouble. The
worst kind of data contamination happens when a Large Language Model (LLM) is
trained on the test split of a benchmark, and then evaluated in the same
benchmark. The extent of the problem is unknown, as it is not straightforward
to measure. Contamination causes an overestimation of the performance of a
contaminated model in a target benchmark and associated task with respect to
their non-contaminated counterparts. The consequences can be very harmful, with
wrong scientific conclusions being published while other correct ones are
discarded. This position paper defines different levels of data contamination
and argues for a community effort, including the development of automatic and
semi-automatic measures to detect when data from a benchmark was exposed to a
model, and suggestions for flagging papers with conclusions that are
compromised by data contamination. | http://arxiv.org/pdf/2310.18018 | Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, Eneko Agirre | cs.CL | Accepted at EMNLP2024-Findings | null | cs.CL | 20231027 | 20231027 | [
{
"id": "2103.03874"
},
{
"id": "2212.09689"
},
{
"id": "2110.14168"
},
{
"id": "2212.10560"
}
] |
2310.18018 | 13 | # 4.3 Contamination after deployment
The last step where the models can be exposed to contamination applies mostly to LLMs offered as service products. With the recent improvements in the quality of LLMs, models that were supposed to be part of bigger products have become products by themselves (ChatGPT or Bard, for example). It is worth noting that, although they are closed models, i.e. no information is known about the architecture or training details, the research community has evaluated them on standard benchmarks (Jiao et al. (2023); among others). The monetary success of closed systems is closely tied to the performance of the model. Therefore, companies have a strong incentive to audit user inputs and retrain their system when the performance in a task is determined to be poor. Those models that are actually being accessed via API calls have been iteratively improved with user input, leading to evaluation data exposure. As a result, the models become aware of the testing data, to the point that one can easily recreate the dataset, as we discuss in Section 5.2 (see examples in Appendix A).
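To make this kind of probe concrete, the following is a minimal sketch of how an auditor might try to recreate benchmark data from a deployed model. Everything in it is an assumption for illustration: `generate` stands in for whatever completion API the product exposes, and the prompt wording is not the exact one used in Appendix A.

```python
# Minimal sketch of a memorization probe against a deployed LLM.
# `generate` is an assumed callable wrapping the product's completion
# API; the prompt template is illustrative, not taken from the paper.

def probe_benchmark(generate, benchmark_name, examples, split="test", n_shown=2):
    """Ask the model to continue the first instances of a benchmark split.

    The prompt is seeded with a few genuine instances; if the completion
    reproduces later instances verbatim, that is strong evidence that the
    model has seen the split. Anything short of verbatim output still
    needs the manual judgment discussed in Section 5.2.
    """
    shown = "\n".join(examples[:n_shown])
    prompt = (
        f"Please continue listing the first instances of the {split} split "
        f"of the {benchmark_name} dataset, exactly as they appear:\n{shown}\n"
    )
    return generate(prompt)
```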
# 5 Measuring data contamination | 2310.18018#13 | NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark | In this position paper, we argue that the classical evaluation on Natural
Language Processing (NLP) tasks using annotated benchmarks is in trouble. The
worst kind of data contamination happens when a Large Language Model (LLM) is
trained on the test split of a benchmark, and then evaluated in the same
benchmark. The extent of the problem is unknown, as it is not straightforward
to measure. Contamination causes an overestimation of the performance of a
contaminated model in a target benchmark and associated task with respect to
their non-contaminated counterparts. The consequences can be very harmful, with
wrong scientific conclusions being published while other correct ones are
discarded. This position paper defines different levels of data contamination
and argues for a community effort, including the development of automatic and
semi-automatic measures to detect when data from a benchmark was exposed to a
model, and suggestions for flagging papers with conclusions that are
compromised by data contamination. | http://arxiv.org/pdf/2310.18018 | Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, Eneko Agirre | cs.CL | Accepted at EMNLP2024-Findings | null | cs.CL | 20231027 | 20231027 | [
{
"id": "2103.03874"
},
{
"id": "2212.09689"
},
{
"id": "2110.14168"
},
{
"id": "2212.10560"
}
] |
2310.18018 | 14 | dataset as we discuss in Section 5.2 (see examples in Appendix A).
# 5 Measuring data contamination
For the reasons we already mentioned, it is necessary to measure existing data contamination cases and to document relevant contamination evidence. In order to achieve this goal, we differentiate two cases. In the first case, we would have open models where there is public access to all the training data, including text used in pre-training, but also, if the LLM was trained on them, instruction-tuning datasets and deployment datasets. In the second case, we would have closed models for which there is no access to training data.
# 5.1 Open LLMs | 2310.18018#14 | NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark | In this position paper, we argue that the classical evaluation on Natural
Language Processing (NLP) tasks using annotated benchmarks is in trouble. The
worst kind of data contamination happens when a Large Language Model (LLM) is
trained on the test split of a benchmark, and then evaluated in the same
benchmark. The extent of the problem is unknown, as it is not straightforward
to measure. Contamination causes an overestimation of the performance of a
contaminated model in a target benchmark and associated task with respect to
their non-contaminated counterparts. The consequences can be very harmful, with
wrong scientific conclusions being published while other correct ones are
discarded. This position paper defines different levels of data contamination
and argues for a community effort, including the development of automatic and
semi-automatic measures to detect when data from a benchmark was exposed to a
model, and suggestions for flagging papers with conclusions that are
compromised by data contamination. | http://arxiv.org/pdf/2310.18018 | Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, Eneko Agirre | cs.CL | Accepted at EMNLP2024-Findings | null | cs.CL | 20231027 | 20231027 | [
{
"id": "2103.03874"
},
{
"id": "2212.09689"
},
{
"id": "2110.14168"
},
{
"id": "2212.10560"
}
] |
2310.18018 | 15 | # 5.1 Open LLMs
Most of the research on data contamination has been focused on analyzing pre-training data with string-matching operations (Dodge et al., 2021), as this provides direct evidence that the LLM was contaminated. Pre-training datasets are unwieldy in size, and string-matching operations can be very slow at this scale. Therefore, several tools for data auditing have been released recently: the ROOTS Search Tool (Piktus et al., 2023) and Data Portraits (Marone and Durme, 2023), among others. As an example of their usefulness, Piktus et al. (2023) found that BLOOM (Workshop et al., 2023) should not be evaluated on XNLI (Conneau et al., 2018) due to contamination. These tools should be made available for all open LLMs, in order to allow for contamination case discovery.
In addition, there is currently no agreed-upon methodology to measure the level of contamination. For cases where the full benchmark is not found, we propose to measure the level of data contamination using benchmark data overlap, that is, the percentage of the benchmark that can be found in the pre-training dataset (Dodge et al., 2021; Piktus et al., 2023).
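As a rough illustration of this overlap measure, the sketch below counts how many benchmark examples appear verbatim (after light normalization) in a pre-training corpus. It is a naive linear scan under assumed inputs (`benchmark_examples` and an iterable of documents `corpus_docs`); at real pre-training scale one would instead query an index such as the ROOTS Search Tool or Data Portraits.

```python
# Sketch: benchmark data overlap as the fraction of benchmark examples
# found verbatim in the pre-training corpus. A naive linear scan over
# assumed inputs; real audits would query a corpus index instead.

def _norm(text):
    # Lowercase and collapse whitespace so formatting noise cannot hide a match.
    return " ".join(text.lower().split())

def benchmark_overlap(benchmark_examples, corpus_docs):
    targets = {_norm(ex) for ex in benchmark_examples}
    found = set()
    for doc in corpus_docs:
        doc = _norm(doc)
        found |= {t for t in targets - found if t in doc}
        if len(found) == len(targets):
            break  # every benchmark example already located
    return len(found) / max(len(targets), 1)  # overlap ratio in [0, 1]
```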
# 5.2 Closed LLMs | 2310.18018#15 | NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark | In this position paper, we argue that the classical evaluation on Natural
Language Processing (NLP) tasks using annotated benchmarks is in trouble. The
worst kind of data contamination happens when a Large Language Model (LLM) is
trained on the test split of a benchmark, and then evaluated in the same
benchmark. The extent of the problem is unknown, as it is not straightforward
to measure. Contamination causes an overestimation of the performance of a
contaminated model in a target benchmark and associated task with respect to
their non-contaminated counterparts. The consequences can be very harmful, with
wrong scientific conclusions being published while other correct ones are
discarded. This position paper defines different levels of data contamination
and argues for a community effort, including the development of automatic and
semi-automatic measures to detect when data from a benchmark was exposed to a
model, and suggestions for flagging papers with conclusions that are
compromised by data contamination. | http://arxiv.org/pdf/2310.18018 | Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, Eneko Agirre | cs.CL | Accepted at EMNLP2024-Findings | null | cs.CL | 20231027 | 20231027 | [
{
"id": "2103.03874"
},
{
"id": "2212.09689"
},
{
"id": "2110.14168"
},
{
"id": "2212.10560"
}
] |
2310.18018 | 16 | # 5.2 Closed LLMs
While most of the recent popular models like LLaMA (Touvron et al., 2023a), GPT-4 (OpenAI, 2023) or Bard have not publicly released their pre-training data, very few works have actually worked on detecting data contamination when the pre-training data is not available (Magar and Schwartz, 2022). Although this scenario is much more challenging than the former, we foresee that
it will become the most prevalent. Developing methods to measure data contamination in this scenario will be crucial for future evaluations. To tackle this problem, we propose to take advantage of LLMs' memorization capabilities. Appendix A shows some examples of using memorization to uncover data contamination for the CoNLL2003 benchmark on three LLMs. In cases where the LLM does not produce the benchmark verbatim, it is left to the auditor to examine the output and judge whether the evidence supports contamination. The process is totally manual and could be scaled in a community effort. | 2310.18018#16 | NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark | In this position paper, we argue that the classical evaluation on Natural
Language Processing (NLP) tasks using annotated benchmarks is in trouble. The
worst kind of data contamination happens when a Large Language Model (LLM) is
trained on the test split of a benchmark, and then evaluated in the same
benchmark. The extent of the problem is unknown, as it is not straightforward
to measure. Contamination causes an overestimation of the performance of a
contaminated model in a target benchmark and associated task with respect to
their non-contaminated counterparts. The consequences can be very harmful, with
wrong scientific conclusions being published while other correct ones are
discarded. This position paper defines different levels of data contamination
and argues for a community effort, including the development of automatic and
semi-automatic measures to detect when data from a benchmark was exposed to a
model, and suggestions for flagging papers with conclusions that are
compromised by data contamination. | http://arxiv.org/pdf/2310.18018 | Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, Eneko Agirre | cs.CL | Accepted at EMNLP2024-Findings | null | cs.CL | 20231027 | 20231027 | [
{
"id": "2103.03874"
},
{
"id": "2212.09689"
},
{
"id": "2110.14168"
},
{
"id": "2212.10560"
}
] |
2310.18018 | 17 | Alternatively, automatic metrics for measuring data contamination levels could be developed. As an initial step in this direction, we reuse and adapt the extractability definition presented in Carlini et al. (2023) for defining memorization. We define that an example s is extractable from evaluation dataset d and model m if there exists a sequence of k examples x immediately preceding s in d such that s is generated when prompting model m with x. We can define the degree of contamination of model m for dataset d as the ratio of extractable examples with respect to the total number of examples in the dataset.
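Read operationally, this definition suggests a simple metric. The sketch below is one possible rendering of it, not an official implementation: `generate` is again an assumed model-completion callable, and verbatim matching after case and whitespace normalization stands in for whatever matching criterion an auditor prefers.

```python
# Sketch of the contamination degree just defined: an example s is
# "extractable" if the model, prompted with the k examples immediately
# preceding s in the dataset, reproduces s (here: verbatim up to case
# and whitespace). `generate` is an assumed completion callable.

def _norm(text):
    return " ".join(text.lower().split())

def is_extractable(generate, prefix_examples, target):
    completion = generate("\n".join(prefix_examples))
    return _norm(target) in _norm(completion)

def contamination_degree(generate, dataset, k=3):
    # Ratio of extractable examples; the first k lack a full k-example prefix.
    hits = sum(
        is_extractable(generate, dataset[i - k : i], dataset[i])
        for i in range(k, len(dataset))
    )
    return hits / max(len(dataset) - k, 1)
```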
One further question remains to be solved, which is whether the lack of memorization of a benchmark ensures that the LLM was not trained on that benchmark. One hypothesis could be that the lack of memorization is correlated with the performance, even if the LLM was trained on the benchmark. Thus the LLM would not have any advantage with respect to another LLM that was not trained on the benchmark. This is currently speculation, so further research on this topic is necessary, given the extensive use of closed LLMs in NLP research.
# 6 Call for action | 2310.18018#17 | NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark | In this position paper, we argue that the classical evaluation on Natural
Language Processing (NLP) tasks using annotated benchmarks is in trouble. The
worst kind of data contamination happens when a Large Language Model (LLM) is
trained on the test split of a benchmark, and then evaluated in the same
benchmark. The extent of the problem is unknown, as it is not straightforward
to measure. Contamination causes an overestimation of the performance of a
contaminated model in a target benchmark and associated task with respect to
their non-contaminated counterparts. The consequences can be very harmful, with
wrong scientific conclusions being published while other correct ones are
discarded. This position paper defines different levels of data contamination
and argues for a community effort, including the development of automatic and
semi-automatic measures to detect when data from a benchmark was exposed to a
model, and suggestions for flagging papers with conclusions that are
compromised by data contamination. | http://arxiv.org/pdf/2310.18018 | Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, Eneko Agirre | cs.CL | Accepted at EMNLP2024-Findings | null | cs.CL | 20231027 | 20231027 | [
{
"id": "2103.03874"
},
{
"id": "2212.09689"
},
{
"id": "2110.14168"
},
{
"id": "2212.10560"
}
] |
2310.18018 | 18 | # 6 Call for action
We want to encourage the NLP community to: (1) Develop auto- or semi-automatic measures to detect when data from a benchmark was exposed to a model; (2) Build a registry of data contamination cases, including the evidence for the contamination; (3) Encourage authors to use the previous tools to ensure that the experimental protocol avoids data contamination to the extent possible; and (4) Address data contamination issues during peer review, and, in the case of published works, devise mechanisms to flag those works with the relevant evidence of data contamination and how data contamination affects the conclusions.
As the problem affects our entire field, we also want to encourage the community to participate in workshops related to this topic, for example the 1st Workshop on Data Contamination2. We think that developing the ideas that arise from this community will play an important role in future NLP evaluations.
# 7 Limitations
In this paper, we address the problem of data contamination that occurs when evaluating LLMs on standard academic benchmarks. We are aware that other issues could exist in current evaluations, but they are out of the scope of this position paper. Related to our proposed solutions, we are aware that these are early-stage solutions and that the proposed effort is really challenging; therefore, we call for further discussion and research on topics related to this issue. | 2310.18018#18 | NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark | In this position paper, we argue that the classical evaluation on Natural
Language Processing (NLP) tasks using annotated benchmarks is in trouble. The
worst kind of data contamination happens when a Large Language Model (LLM) is
trained on the test split of a benchmark, and then evaluated in the same
benchmark. The extent of the problem is unknown, as it is not straightforward
to measure. Contamination causes an overestimation of the performance of a
contaminated model in a target benchmark and associated task with respect to
their non-contaminated counterparts. The consequences can be very harmful, with
wrong scientific conclusions being published while other correct ones are
discarded. This position paper defines different levels of data contamination
and argues for a community effort, including the development of automatic and
semi-automatic measures to detect when data from a benchmark was exposed to a
model, and suggestions for flagging papers with conclusions that are
compromised by data contamination. | http://arxiv.org/pdf/2310.18018 | Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, Eneko Agirre | cs.CL | Accepted at EMNLP2024-Findings | null | cs.CL | 20231027 | 20231027 | [
{
"id": "2103.03874"
},
{
"id": "2212.09689"
},
{
"id": "2110.14168"
},
{
"id": "2212.10560"
}
] |
2310.18018 | 20 | # References
Stephen Bach, Victor Sanh, Zheng Xin Yong, Albert Webson, Colin Raffel, Nihal V. Nayak, Abheesht Sharma, Taewoon Kim, M Saiful Bari, Thibault Fevry, Zaid Alyafeai, Manan Dey, Andrea Santilli, Zhiqing Sun, Srulik Ben-david, Canwen Xu, Gunjan Chhablani, Han Wang, Jason Fries, Maged Al-shaibani, Shanya Sharma, Urmish Thakker, Khalid Almubarak, Xiangru Tang, Dragomir Radev, Mike Tian-jian Jiang, and Alexander Rush. 2022. PromptSource: An integrated development environment and repository for natural language prompts. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 93–104, Dublin, Ireland. Association for Computational Linguistics.
Abeba Birhane, Vinay Uday Prabhu, and Emmanuel Kahembwe. 2021. Multimodal datasets: misogyny, pornography, and malignant stereotypes.
2https://conda-workshop.github.io | 2310.18018#20 | NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark | In this position paper, we argue that the classical evaluation on Natural
Language Processing (NLP) tasks using annotated benchmarks is in trouble. The
worst kind of data contamination happens when a Large Language Model (LLM) is
trained on the test split of a benchmark, and then evaluated in the same
benchmark. The extent of the problem is unknown, as it is not straightforward
to measure. Contamination causes an overestimation of the performance of a
contaminated model in a target benchmark and associated task with respect to
their non-contaminated counterparts. The consequences can be very harmful, with
wrong scientific conclusions being published while other correct ones are
discarded. This position paper defines different levels of data contamination
and argues for a community effort, including the development of automatic and
semi-automatic measures to detect when data from a benchmark was exposed to a
model, and suggestions for flagging papers with conclusions that are
compromised by data contamination. | http://arxiv.org/pdf/2310.18018 | Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, Eneko Agirre | cs.CL | Accepted at EMNLP2024-Findings | null | cs.CL | 20231027 | 20231027 | [
{
"id": "2103.03874"
},
{
"id": "2212.09689"
},
{
"id": "2110.14168"
},
{
"id": "2212.10560"
}
] |
2310.18018 | 21 | 2https://conda-workshop.github.io
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners.
Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan Zhang. 2023. Quantifying memorization across neural language models. In The Eleventh International Conference on Learning Representations. | 2310.18018#21 | NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark | In this position paper, we argue that the classical evaluation on Natural
Language Processing (NLP) tasks using annotated benchmarks is in trouble. The
worst kind of data contamination happens when a Large Language Model (LLM) is
trained on the test split of a benchmark, and then evaluated in the same
benchmark. The extent of the problem is unknown, as it is not straightforward
to measure. Contamination causes an overestimation of the performance of a
contaminated model in a target benchmark and associated task with respect to
their non-contaminated counterparts. The consequences can be very harmful, with
wrong scientific conclusions being published while other correct ones are
discarded. This position paper defines different levels of data contamination
and argues for a community effort, including the development of automatic and
semi-automatic measures to detect when data from a benchmark was exposed to a
model, and suggestions for flagging papers with conclusions that are
compromised by data contamination. | http://arxiv.org/pdf/2310.18018 | Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, Eneko Agirre | cs.CL | Accepted at EMNLP2024-Findings | null | cs.CL | 20231027 | 20231027 | [
{
"id": "2103.03874"
},
{
"id": "2212.09689"
},
{
"id": "2110.14168"
},
{
"id": "2212.10560"
}
] |
2310.18018 | 24 | Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.
Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475–2485, Brussels, Belgium. Association for Computational Linguistics. | 2310.18018#24 | NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark | In this position paper, we argue that the classical evaluation on Natural
Language Processing (NLP) tasks using annotated benchmarks is in trouble. The
worst kind of data contamination happens when a Large Language Model (LLM) is
trained on the test split of a benchmark, and then evaluated in the same
benchmark. The extent of the problem is unknown, as it is not straightforward
to measure. Contamination causes an overestimation of the performance of a
contaminated model in a target benchmark and associated task with respect to
their non-contaminated counterparts. The consequences can be very harmful, with
wrong scientific conclusions being published while other correct ones are
discarded. This position paper defines different levels of data contamination
and argues for a community effort, including the development of automatic and
semi-automatic measures to detect when data from a benchmark was exposed to a
model, and suggestions for flagging papers with conclusions that are
compromised by data contamination. | http://arxiv.org/pdf/2310.18018 | Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, Eneko Agirre | cs.CL | Accepted at EMNLP2024-Findings | null | cs.CL | 20231027 | 20231027 | [
{
"id": "2103.03874"
},
{
"id": "2212.09689"
},
{
"id": "2110.14168"
},
{
"id": "2212.10560"
}
] |
2310.18018 | 25 | the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475–2485, Brussels, Belgium. Association for Computational Linguistics.
Jesse Dodge, Maarten Sap, Ana Marasović, William Agnew, Gabriel Ilharco, Dirk Groeneveld, Margaret Mitchell, and Matt Gardner. 2021. Documenting large webtext corpora: A case study on the colossal clean crawled corpus. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1286–1305, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Nan Du, Yanping Huang, Andrew M. Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, Barret Zoph, Liam Fedus, Maarten Bosma, Zongwei Zhou, Tao Wang, Yu Emma Wang, Kellie Webster, Marie Pellat, Kevin Robinson, Kathy Meier-Hellstern, Toju Duke, Lucas Dixon, Kun Zhang, Quoc V. Le, Yonghui Wu, Zhifeng Chen, and Claire Cui. 2021. Glam: Efficient scaling of language models with mixture-of-experts. CoRR, abs/2112.06905. | 2310.18018#25 | NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark | In this position paper, we argue that the classical evaluation on Natural
Language Processing (NLP) tasks using annotated benchmarks is in trouble. The
worst kind of data contamination happens when a Large Language Model (LLM) is
trained on the test split of a benchmark, and then evaluated in the same
benchmark. The extent of the problem is unknown, as it is not straightforward
to measure. Contamination causes an overestimation of the performance of a
contaminated model in a target benchmark and associated task with respect to
their non-contaminated counterparts. The consequences can be very harmful, with
wrong scientific conclusions being published while other correct ones are
discarded. This position paper defines different levels of data contamination
and argues for a community effort, including the development of automatic and
semi-automatic measures to detect when data from a benchmark was exposed to a
model, and suggestions for flagging papers with conclusions that are
compromised by data contamination. | http://arxiv.org/pdf/2310.18018 | Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, Eneko Agirre | cs.CL | Accepted at EMNLP2024-Findings | null | cs.CL | 20231027 | 20231027 | [
{
"id": "2103.03874"
},
{
"id": "2212.09689"
},
{
"id": "2110.14168"
},
{
"id": "2212.10560"
}
] |
2310.18018 | 26 | Aparna Elangovan, Jiayuan He, and Karin Verspoor. 2021. Memorization vs. generalization: Quantifying data leakage in NLP performance evaluation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1325–1335, Online. Association for Computational Linguistics.
Besnik Fetahu, Sudipta Kar, Zhiyu Chen, Oleg Rokhlenko, and Shervin Malmasi. 2023. SemEval-2023 Task 2: Fine-grained Multilingual Named Entity Recognition (MultiCoNER 2). In Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023). Association for Computational Linguistics.
Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A. Smith. 2020. RealToxicityPrompts: Evaluating neural toxic degeneration in language models. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3356–3369, Online. Association for Computational Linguistics. | 2310.18018#26 | NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark | In this position paper, we argue that the classical evaluation on Natural
Language Processing (NLP) tasks using annotated benchmarks is in trouble. The
worst kind of data contamination happens when a Large Language Model (LLM) is
trained on the test split of a benchmark, and then evaluated in the same
benchmark. The extent of the problem is unknown, as it is not straightforward
to measure. Contamination causes an overestimation of the performance of a
contaminated model in a target benchmark and associated task with respect to
their non-contaminated counterparts. The consequences can be very harmful, with
wrong scientific conclusions being published while other correct ones are
discarded. This position paper defines different levels of data contamination
and argues for a community effort, including the development of automatic and
semi-automatic measures to detect when data from a benchmark was exposed to a
model, and suggestions for flagging papers with conclusions that are
compromised by data contamination. | http://arxiv.org/pdf/2310.18018 | Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, Eneko Agirre | cs.CL | Accepted at EMNLP2024-Findings | null | cs.CL | 20231027 | 20231027 | [
{
"id": "2103.03874"
},
{
"id": "2212.09689"
},
{
"id": "2110.14168"
},
{
"id": "2212.10560"
}
] |