---
language:
- en
- ja
library_name: transformers
pipeline_tag: text-generation
license:
- llama3.1
- gemma
model_type: llama
datasets:
- lmsys/lmsys-chat-1m
- tokyotech-llm/lmsys-chat-1m-synth
- argilla/magpie-ultra-v0.1
---
# Llama 3.1 Swallow - Built with Llama
Llama 3.1 Swallow is a series of large language models (8B, 70B) that were built by continual pre-training on the [Meta Llama 3.1](https://huggingface.co/collections/meta-llama/llama-31-669fc079a0c406a149a5738f) models.
Llama 3.1 Swallow enhances the Japanese language capabilities of the original Llama 3.1 while retaining its English language capabilities.
For continual pre-training, we used approximately 200 billion tokens sampled from a large Japanese web corpus (Swallow Corpus Version 2), Japanese and English Wikipedia articles, mathematical and coding content, and other sources (see the Training Datasets section of the base model).
The instruction-tuned models (Instruct) were built by supervised fine-tuning (SFT) on synthetic data specially built for Japanese.
See the Swallow Model Index section to find other model variants.
**Note**: [Llama-3.1-Swallow-70B-Instruct-v0.3](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.3) is an instruction-tuned version of [Llama-3.1-Swallow-70B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-v0.1) with our instruction datasets.
# Release History
- **December 30, 2024**: Released [Llama-3.1-Swallow-70B-Instruct-v0.3](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.3).
- **December 23, 2024**: Released [Llama-3.1-Swallow-8B-Instruct-v0.3](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.3).
- **November 11, 2024**: Released [Llama-3.1-Swallow-8B-v0.2](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-v0.2) and [Llama-3.1-Swallow-8B-Instruct-v0.2](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.2).
- **October 08, 2024**: Released [Llama-3.1-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-v0.1), [Llama-3.1-Swallow-8B-Instruct-v0.1](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.1), [Llama-3.1-Swallow-70B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-v0.1), and [Llama-3.1-Swallow-70B-Instruct-v0.1](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.1).
# Major Updates
This release enhances the conversation capability of Llama 3.1 Swallow.
The updated model, Llama-3.1-Swallow-70B-Instruct-v0.3, generates helpful and detailed responses based on user instructions and conversation history.
Llama-3.1-Swallow-70B-Instruct-v0.3 outperforms its predecessor, [Llama-3.1-Swallow-70B-Instruct-v0.1](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.1), by 5.68 points on Japanese MT-Bench.
## Swallow Model Index
|Model|Llama-3.1-Swallow v0.1|Llama-3.1-Swallow-Instruct v0.1|Llama-3.1-Swallow v0.2|Llama-3.1-Swallow-Instruct v0.2|Llama-3.1-Swallow-Instruct v0.3|
|---|---|---|---|---|---|
|8B| [Link](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-v0.1) | [Link](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.1) | [Link](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-v0.2) | [Link](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.2) | [Link](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.3) |
|70B| [Link](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-v0.1) | [Link](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.1) | | | [Link](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.3) |
![logo](./logo.png)
The website [https://swallow-llm.github.io/](https://swallow-llm.github.io/) provides large language models developed by the Swallow team.
## Model Details
* **Model type**: Please refer to [Llama 3.1 MODEL_CARD](https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md) for details on the model architecture.
* **Language(s)**: Japanese, English
* **Library**: [Megatron-LM](https://github.com/NVIDIA/Megatron-LM)
* **Tokenizer**: Please refer to [Llama 3.1 blog](https://ai.meta.com/blog/meta-llama-3-1) for details on the tokenizer.
* **Contact**: swallow[at]nlp.c.titech.ac.jp
## Model Performance
### MT-Bench JA
|Model|coding|extraction|humanities|math|reasoning|roleplay|stem|writing|JMTAvg|
|---|---|---|---|---|---|---|---|---|---|
| Llama 3 Youko 70B Instruct | 0.6632| 0.8387| 0.8108| 0.4655| 0.7013| 0.7778| 0.7544| 0.7662| 0.7222|
| Llama-3.1-70B-Japanese-Instruct-2407 | 0.6267| 0.7525| 0.7938| 0.5750| 0.5590| 0.7725| 0.7240| 0.7180| 0.6902|
| Llama 3 heron brain 70B v0.3 | 0.3762| 0.7892| 0.7274| 0.5589| 0.5070| 0.6662| 0.6880| 0.6996| 0.6266|
| Llama 3 70B Instruct |0.5969| 0.8410| 0.7120| 0.4481| 0.4884| 0.7117| 0.6510| 0.6900| 0.6424|
| Llama 3.1 70B Instruct | 0.5252| 0.7846| 0.7086| 0.5063| 0.6979| 0.6888| 0.6402| 0.6653| 0.6521|
| Llama 3.3 70B Instruct | 0.5193| 0.7750| 0.7213| 0.5228| 0.6721| 0.7407| 0.6386| 0.7043| 0.6618|
| Llama 3.1 Swallow 70B Instruct v0.1| 0.5676| 0.7859| 0.7490| 0.5437| 0.6383| 0.6870| 0.6121| 0.6540| 0.6547|
| **Llama 3.1 Swallow 70B Instruct v0.3** | 0.6063| 0.8052| 0.8410| 0.5591| 0.6280| 0.7774| 0.6920| 0.7832| 0.7115|
| Qwen2-72B-Instruct |0.5699| 0.7858| 0.8222| 0.5096| **0.7032**| 0.7963| 0.7728| **0.8223**| 0.7228|
| Qwen2.5-72B-Instruct |0.7060| 0.7866| 0.8122| 0.6968| 0.6536| **0.8301**| 0.8060| 0.7841| 0.7594|
| GPT-3.5 (gpt-3.5-turbo-0125) | 0.6851|0.7641| 0.7414| 0.5522| 0.5128| 0.7104| 0.6266| 0.7361| 0.6661|
| GPT-4o (gpt-4o-2024-05-13) | **0.7296**| **0.8540**| **0.8646**| **0.6641**| 0.6661| 0.8274| **0.8184**| 0.8085| **0.7791**|
### Japanese tasks
|Model|JCom.|JEMHopQA|NIILC|JSQuAD|XL-Sum|MGSM|WMT20-en-ja|WMT20-ja-en|JMMLU|JHumanEval|Ja Avg|
|---|---|---|---|---|---|---|---|---|---|---|---|
| |4-shot|4-shot|4-shot|4-shot|1-shot|4-shot|4-shot|4-shot|5-shot|0-shot| |
| |EM acc|Char-F1|Char-F1|Char-F1|ROUGE-2|EM acc|BLEU|BLEU|EM acc|pass@1| |
| Llama 3 Youko 70B Instruct | 0.9526| 0.6252| 0.5853| 0.9215| 0.1983| 0.7400| 0.2633| 0.2245| 0.7170| 0.6098| 0.5838|
| Llama-3.1-70B-Japanese-Instruct-2407 |0.9562| 0.6466| 0.6602| 0.9187| 0.1564| 0.7480| 0.2901| 0.2410| 0.7227| 0.6274| 0.5967|
| Llama 3 heron brain 70B v0.3 |0.9660| 0.6643| 0.6817| 0.9221| 0.2611| 0.7720| 0.3093| 0.2578| 0.7077| 0.6079| **0.6150**|
| Llama 3 70B Instruct |0.9419| 0.6114| 0.5506| 0.9164| 0.1912| 0.7200| 0.2708| 0.2350| 0.6789| 0.6610| 0.5777|
| Llama 3.1 70B Instruct |0.9482| 0.6246| 0.5781| 0.9201| 0.1772| 0.7440| 0.2805| 0.2472| 0.7323| 0.6933| 0.5945|
| Llama 3.3 70B Instruct |0.9410| 0.6399| 0.5728| 0.8927| 0.1787| 0.7840| 0.2779| 0.2429| 0.7340| 0.7439| 0.6008|
| Llama 3.1 Swallow 70B Instruct v0.1 |0.9598| 0.6192| 0.6605| 0.9235| 0.1938| 0.7760| 0.3123| 0.2593| 0.7117| 0.4713| 0.5887|
| **Llama 3.1 Swallow 70B Instruct v0.3** |0.9651| 0.6322| 0.6532| 0.9107| 0.1951| 0.7520| 0.3053| 0.2580| 0.6896| 0.6006| 0.5962|
| Qwen2-72B-Instruct |0.9634| 0.6268| 0.5418| 0.9210| 0.1644| 0.7840| 0.2592| 0.2327| 0.7713| 0.6909| 0.5955|
| Qwen2.5-72B-Instruct |0.9696| 0.5699| 0.5811| 0.7381| 0.1706| 0.8360| 0.2269| 0.2179| 0.7899| 0.6256| 0.5726|
### English tasks
|Model|OpenBookQA|TriviaQA|HellaSWAG|SQuAD2.0|XWINO|MMLU|GSM8K|BBH|HumanEval|En Avg|
|---|---|---|---|---|---|---|---|---|---|---|
| |4-shot|4-shot|4-shot|4-shot|4-shot|5-shot|4-shot|3-shot|0-shot| |
| |Acc|EM acc|Acc|EM acc|Acc|Acc|EM acc|CoT EM Acc|pass@1| |
| Llama 3 Youko 70B Instruct | 0.4500| 0.7973| 0.6863| 0.3914| 0.9153| 0.8055| 0.8923| 0.7814| 0.6598| 0.7088|
| Llama-3.1-70B-Japanese-Instruct-2407| 0.4220| 0.8104| 0.6481| 0.3744| 0.9170| 0.8071| 0.8893| 0.8228| 0.7463| 0.7153|
| Llama 3 heron brain 70B v0.3| 0.4460 |0.8107 |0.6682| 0.4085| 0.9174| 0.7898| 0.8772| 0.7586| 0.6713| 0.7053|
| Llama 3 70B Instruct |0.4400| 0.7999| 0.6552| 0.4024| 0.9127| 0.7992| 0.9052| 0.8326| 0.7555| 0.7225|
| Llama 3.1 70B Instruct |0.4300| 0.8212| 0.6621| 0.3921| 0.9157| 0.8213| 0.8764| 0.8390| 0.7915| 0.7277|
| Llama 3.3 70B Instruct |0.4260| 0.8172| 0.6674| 0.3933| 0.9174| 0.8240| 0.8901| 0.8529| 0.8341| **0.7358**|
| Llama 3.1 Swallow 70B Instruct v0.1 |0.4520| 0.8148| 0.6834| 0.4012| 0.9157| 0.7855| 0.8886| 0.8486| 0.5823| 0.7080|
| **Llama 3.1 Swallow 70B Instruct v0.3** |0.4540| 0.8245| 0.6915| 0.4082| 0.9187| 0.7770| 0.8726| 0.8148| 0.6378| 0.7110|
| Qwen2-72B-Instruct |0.4360| 0.7588| 0.6857| 0.3913| 0.9110| 0.8391| 0.8499| 0.2436| 0.6939| 0.6455|
| Qwen2.5-72B-Instruct |0.4540| 0.6764| 0.7064| 0.3550| 0.8895| 0.8478| 0.9113| 0.4027| 0.6165| 0.6511|
## Evaluation Benchmarks
### MT-Bench JA
We used [Japanese MT-Bench](https://wandb.ai/wandb-japan/llm-leaderboard/artifacts/dataset/mtbench_ja_question) to assess the capabilities of multi-turn dialogue with the following settings:
- Implementation: FastChat [Zheng+, 2023] (commit #e86e70d0)
- Question: [Nejumi LLM-Leaderboard NEO, mtbench_ja_question_v3](https://wandb.ai/wandb-japan/llm-leaderboard/artifacts/dataset/mtbench_ja_question/v3)
- Reference Answer: [Nejumi LLM-Leaderboard NEO, mtbench_ja_referenceanswer_v1](https://wandb.ai/wandb-japan/llm-leaderboard/artifacts/dataset/mtbench_ja_referenceanswer/v1)
- Prompt for Judge: [Nejumi LLM-Leaderboard NEO, mtbench_ja_prompt_v1](https://wandb.ai/wandb-japan/llm-leaderboard/artifacts/dataset/mtbench_ja_prompt/v1)
- Judge: `gpt-4-1106-preview`
- Scoring: Absolute scale normalized to a 0-1 range, averaged over five runs (a sketch of this normalization follows below).
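As a rough illustration of the scoring above, here is a minimal sketch of the aggregation step. It assumes the judge assigns scores on a 1-10 scale and that normalization simply divides by 10; the authoritative logic is the FastChat implementation referenced above.
```python
# Sketch: aggregate judge scores into a single 0-1 MT-Bench score.
# Assumption: judge scores are on a 1-10 scale; normalization divides by 10.
from statistics import mean

def normalized_mtbench_score(runs: list[list[float]]) -> float:
    """Average per-question judge scores within each run, average across runs,
    then map the 1-10 scale onto 0-1."""
    per_run_means = [mean(run) for run in runs]  # one mean per evaluation run
    return mean(per_run_means) / 10.0            # five runs -> one 0-1 score

# Example: judge scores for three questions across five independent runs.
runs = [[8, 7, 9], [7, 7, 8], [8, 8, 9], [7, 8, 8], [8, 7, 9]]
print(f"normalized score: {normalized_mtbench_score(runs):.4f}")
```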
### Japanese evaluation benchmarks
We used llm-jp-eval (v1.3.0), the JP Language Model Evaluation Harness (commit #9b42d41), and the Code Generation LM Evaluation Harness (commit #0261c52). The details are as follows:
- Multiple-choice question answering (JCommonsenseQA [Kurihara et al., 2022])
- Open-ended question answering (JEMHopQA [Ishii et al., 2024])
- Open-ended question answering (NIILC [Sekine, 2003])
- Machine reading comprehension (JSQuAD [Kurihara et al., 2022])
- Automatic summarization (XL-Sum [Hasan et al., 2021])
- Machine translation (WMT2020 ja-en [Barrault et al., 2020])
- Machine translation (WMT2020 en-ja [Barrault et al., 2020])
- Mathematical reasoning (MGSM [Shi et al., 2023])
- Academic exams (JMMLU [Yin et al., 2024])
- Code generation (JHumanEval [Sato et al., 2024])
### English evaluation benchmarks
We used the Language Model Evaluation Harness (v0.4.2) and the Code Generation LM Evaluation Harness (commit #0261c52); an example invocation is shown after this list. The details are as follows:
- Multiple-choice question answering (OpenBookQA [Mihaylov et al., 2018])
- Open-ended question answering (TriviaQA [Joshi et al., 2017])
- Machine reading comprehension (SQuAD2 [Rajpurkar et al., 2018])
- Commonsense reasoning (XWINO [Tikhonov and Ryabinin, 2021])
- Natural language inference (HellaSwag [Zellers et al., 2019])
- Mathematical reasoning (GSM8K [Cobbe et al., 2021])
- Reasoning (BBH (BIG-Bench-Hard) [Suzgun et al., 2023])
- Academic exams (MMLU [Hendrycks et al., 2021])
- Code generation (HumanEval [Chen et al., 2021])
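For reference, a single benchmark from the list above can be run with the Language Model Evaluation Harness CLI roughly as follows. This is a sketch, not our exact configuration: the task name, few-shot count, and batch size here are illustrative.
```sh
pip install lm-eval==0.4.2
lm_eval --model hf \
  --model_args pretrained=tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.3,dtype=bfloat16 \
  --tasks hellaswag \
  --num_fewshot 4 \
  --batch_size 8
```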
## Usage
```sh
pip install vllm
```
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_name = "tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.3"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Note: a 70B model generally does not fit on a single GPU; increase
# tensor_parallel_size (e.g., to 4) to shard the weights across GPUs.
llm = LLM(
    model=model_name,
    tensor_parallel_size=1,
)

sampling_params = SamplingParams(
    temperature=0.6, top_p=0.9, max_tokens=512, stop="<|eot_id|>"
)

message = [
    # System prompt: "You are a sincere and excellent Japanese assistant."
    {"role": "system", "content": "あなたは誠実で優秀な日本人のアシスタントです。"},
    {
        # User prompt: "In a Tokyo park in autumn foliage, with Tokyo Tower and
        # skyscrapers in the background, write a heartwarming story in which a
        # swallow soaring in the sky meets a llama standing on the grass."
        "role": "user",
        "content": "東京の紅葉した公園で、東京タワーと高層ビルを背景に、空を舞うツバメと草地に佇むラマが出会う温かな物語を書いてください。",
    },
]

# Render the conversation in the Llama 3.1 chat format and append the
# assistant header so generation starts from the model's reply.
prompt = tokenizer.apply_chat_template(
    message, tokenize=False, add_generation_prompt=True
)

output = llm.generate(prompt, sampling_params)
print(output[0].outputs[0].text)
```
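If you prefer plain Hugging Face Transformers over vLLM, the model can be run with the same chat template. The following is a minimal sketch under the assumption that `bfloat16` weights are sharded across all visible GPUs with `device_map="auto"`; the sampling parameters mirror the vLLM example above.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "tokyotech-llm/Llama-3.1-Swallow-70B-Instruct-v0.3"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # halves memory relative to float32
    device_map="auto",           # shard the 70B weights across available GPUs
)

message = [
    {"role": "system", "content": "あなたは誠実で優秀な日本人のアシスタントです。"},
    {"role": "user", "content": "東京の紅葉した公園で、東京タワーと高層ビルを背景に、空を舞うツバメと草地に佇むラマが出会う温かな物語を書いてください。"},
]

input_ids = tokenizer.apply_chat_template(
    message, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids, max_new_tokens=512, temperature=0.6, top_p=0.9, do_sample=True
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```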
## Training Datasets
### Instruction Tuning
The following datasets were used for instruction tuning.
- `lmsys-chat-1m-synth-gemma2-2turns-ja-wo-pii-and-template-instructions`
  - A multi-turn Japanese instruction dataset synthesized and derived from [lmsys-chat-1m](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) [\[Zhang+, ICLR24\]](https://openreview.net/forum?id=BOfDKxfwt0).
  - First-turn user instructions were translated into Japanese via DeepL (machine translation), and assistant responses were generated with [gemma-2-27b-it](https://huggingface.co/google/gemma-2-27b-it). The same model served as a judge for rejection sampling (n=6); see the filtering sketch after this list.
  - Second-turn user instructions and responses were synthesized with [gemma-2-27b-it](https://huggingface.co/google/gemma-2-27b-it). The same model scored each second-turn response on a 1-10 scale, and responses scoring below 9 were rejected along with their corresponding instructions.
  - Conversations containing personally identifiable information (PII) or template-based user instructions were removed, as were duplicate instructions.
  - The dataset will be available at [tokyotech-llm/lmsys-chat-1m-synth](https://huggingface.co/datasets/tokyotech-llm/lmsys-chat-1m-synth).
- `filtered-magpie-ultra-ja`
  - A Japanese variant of the `filtered-magpie-ultra-en` dataset, translated into Japanese by [gemma-2-27b-it](https://huggingface.co/google/gemma-2-27b-it).
- `gemma-magpie`
  - A Japanese synthetic question-and-answer dataset generated from scratch by [gemma-2-27b-it](https://huggingface.co/google/gemma-2-27b-it). User instructions were created with topic-specific prompts, and assistant responses were then generated for those instructions.
  - The conversations were heuristically filtered for quality and length, after which [gemma-2-27b-it](https://huggingface.co/google/gemma-2-27b-it) scored each conversation on a 1-10 scale; conversations scoring 7 or lower were rejected.
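The recurring pattern in these pipelines is LLM-as-a-judge filtering: a judge model rates each candidate on a 1-10 scale, and only candidates at or above a threshold are kept. Below is a minimal sketch of that filtering step; `toy_judge` is a hypothetical placeholder for prompting the actual judge model (gemma-2-27b-it in our pipelines) and parsing its numeric rating.
```python
from typing import Callable

# Hypothetical stand-in for the judge model; a real implementation would
# prompt the judge LLM with the conversation and parse its 1-10 rating.
def toy_judge(instruction: str, response: str) -> int:
    return 9 if len(response) > 10 else 3  # toy heuristic, illustration only

def filter_by_judge(
    pairs: list[dict],
    judge: Callable[[str, str], int],
    threshold: int = 9,  # e.g., second-turn responses scoring below 9 were rejected
) -> list[dict]:
    """Keep only instruction/response pairs whose judge score meets the threshold."""
    return [p for p in pairs if judge(p["instruction"], p["response"]) >= threshold]

pairs = [
    {"instruction": "日本の首都は?", "response": "日本の首都は東京です。江戸時代以来の歴史があります。"},
    {"instruction": "1+1は?", "response": "2"},
]
print(filter_by_judge(pairs, toy_judge))  # only the detailed first pair survives
```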
## Risks and Limitations
The models released here are still in an early stage of our research and development, and they have not been tuned to ensure that their outputs align with human intent and safety considerations.
## Acknowledgements
We thank Meta Research for releasing Llama 3.1 under a generous open license.
We received various kinds of support, including:
+ AIST project: "Research and Development of Foundation Models for Generative AI in the Physical Domain"
+ NEDO project: "Development of Artificial Intelligence Application Technology to Support Judgment in Design Risk Assessment Work Based on the Perspective of Skilled Persons" (JPNP18002) of "Development of Integration Technology as the Core of Next Generation Artificial Intelligence and Robotics"
+ MEXT project: "Formation of R&D center to ensure transparency and reliability of generative AI models"
+ AIST program: [Large Generative AI Development Support Program](https://abci.ai/en/link/lfm_support_program.html)
## License
[META LLAMA 3.1 COMMUNITY LICENSE](https://www.llama.com/llama3_1/license/) and [Gemma Terms of Use](https://ai.google.dev/gemma/terms)
## Authors
Here are the team members:
- From [Tokyo Institute of Technology Okazaki Laboratory](https://www.nlp.c.titech.ac.jp/index.en.html), the following members:
- [Naoaki Okazaki](https://www.chokkan.org/index.ja.html)
- [Sakae Mizuki](https://s-mizuki-nlp.github.io/)
- [Youmi Ma](https://www.nlp.c.titech.ac.jp/member/youmi.en.html)
- [Koki Maeda](https://sites.google.com/view/silviase)
- [Kakeru Hattori](https://aya-se.vercel.app/)
- [Masanari Ohi](https://sites.google.com/view/masanariohi)
- [Hinari Shimada](https://hinarishimada.github.io/portfolio)
- [Taihei Shiotani](https://github.com/inatoihs)
- [Koshiro Saito](https://sites.google.com/view/koshiro-saito)
- From [Tokyo Institute of Technology YOKOTA Laboratory](https://www.rio.gsic.titech.ac.jp/en/index.html), the following members:
- [Rio Yokota](https://twitter.com/rioyokota)
- [Kazuki Fujii](https://twitter.com/okoge_kaz)
- [Taishi Nakamura](https://twitter.com/Setuna7777_2)
- [Takumi Okamoto](https://www.linkedin.com/in/takumi-okamoto)
- [Ishida Shigeki](https://www.wantedly.com/id/reborn27)
- From [Artificial Intelligence Research Center, AIST, Japan](https://www.airc.aist.go.jp/en/teams/), the following members:
- [Hiroya Takamura](https://sites.google.com/view/hjtakamura)
## How to cite
If you find our work helpful, please feel free to cite the following papers.
```tex
@inproceedings{Fujii:COLM2024,
  title={Continual Pre-Training for Cross-Lingual LLM Adaptation: Enhancing Japanese Language Capabilities},
  author={Kazuki Fujii and Taishi Nakamura and Mengsay Loem and Hiroki Iida and Masanari Ohi and Kakeru Hattori and Hirai Shota and Sakae Mizuki and Rio Yokota and Naoaki Okazaki},
  booktitle={Proceedings of the First Conference on Language Modeling},
  series={COLM},
  pages={(to appear)},
  year={2024},
  month=oct,
  address={University of Pennsylvania, USA},
}
@inproceedings{Okazaki:COLM2024,
  title={Building a Large Japanese Web Corpus for Large Language Models},
  author={Naoaki Okazaki and Kakeru Hattori and Hirai Shota and Hiroki Iida and Masanari Ohi and Kazuki Fujii and Taishi Nakamura and Mengsay Loem and Rio Yokota and Sakae Mizuki},
  booktitle={Proceedings of the First Conference on Language Modeling},
  series={COLM},
  pages={(to appear)},
  year={2024},
  month=oct,
  address={University of Pennsylvania, USA},
}
```
### References
```tex
@misc{dubey2024llama3herdmodels,
title={The Llama 3 Herd of Models},
author={Abhimanyu Dubey and Abhinav Jauhri and Abhinav Pandey and Abhishek Kadian and Ahmad Al-Dahle and Aiesha Letman and Akhil Mathur and Alan Schelten and Amy Yang and Angela Fan et al.},
year={2024},
eprint={2407.21783},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2407.21783},
}
``` |