Qwen2.5 Bakeneko 32B (rinna/qwen2.5-bakeneko-32b)
Overview
We conducted continual pre-training of Qwen/Qwen2.5-32B on approximately 18B tokens from a mixture of Japanese and English datasets. The continual pre-training improves the model's performance on Japanese tasks.
The name bakeneko comes from the Japanese word 化け猫/ばけねこ/Bakeneko, a kind of Japanese mythical creature (妖怪/ようかい/Youkai).
| Size | Continual Pre-Training | Instruction-Tuning | DeepSeek-R1-Distilled |
|---|---|---|---|
| 32B | Qwen2.5 Bakeneko 32B [HF] | Qwen2.5 Bakeneko 32B Instruct [HF][AWQ][GGUF][GPTQ int8][GPTQ int4] | DeepSeek R1 Distill Qwen2.5 Bakeneko 32B [HF][AWQ][GGUF][GPTQ int8][GPTQ int4] |
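Since this is a base model without instruction tuning, it is best suited to plain text completion rather than chat. Below is a minimal usage sketch with Hugging Face transformers; the prompt, dtype, and sampling settings are illustrative assumptions, not recommended values.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rinna/qwen2.5-bakeneko-32b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 to reduce memory for a 32B model
    device_map="auto",
)

# Base (non-instruct) model: provide a plain-text prefix to complete.
prompt = "西田幾多郎は、"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,  # illustrative sampling settings
        top_p=0.9,
    )
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```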
Library
The model was trained using code based on Lightning-AI/litgpt.
Model architecture
A 64-layer, 5120-hidden-size transformer-based language model. Please refer to the Qwen2.5 Technical Report for detailed information on the model's architecture.
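As a quick sanity check, these hyperparameters can be read from the model's configuration via transformers (a minimal sketch; attribute names follow the Qwen2 configuration class):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("rinna/qwen2.5-bakeneko-32b")
print(config.num_hidden_layers)  # expected: 64
print(config.hidden_size)        # expected: 5120
```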
Training
The model was initialized with the Qwen/Qwen2.5-32B model and continually trained on around 18B tokens from a mixture of the following corpora:
- Japanese CC-100
- Japanese C4
- Japanese OSCAR
- The Pile
- Wikipedia
- rinna curated Japanese dataset
Contributors
- Toshiaki Wakatsuki
- Xinqi Chen
- Kei Sawada
Benchmarking
Please refer to rinna's LM benchmark page.
Tokenization
The model uses the original Qwen/Qwen2.5-32B tokenizer.
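Because the tokenizer is unchanged from the base model, it can be loaded directly from this repository. A minimal round-trip sketch (the sample sentence is illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rinna/qwen2.5-bakeneko-32b")

# Round-trip a Japanese sentence through the Qwen2.5 BPE tokenizer.
text = "化け猫は日本の妖怪の一種です。"
ids = tokenizer.encode(text)
print(ids)
print(tokenizer.decode(ids))
```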
How to cite
```bibtex
@misc{rinna-qwen2.5-bakeneko-32b,
    title = {rinna/qwen2.5-bakeneko-32b},
    author = {Wakatsuki, Toshiaki and Chen, Xinqi and Sawada, Kei},
    url = {https://huggingface.co/rinna/qwen2.5-bakeneko-32b}
}

@inproceedings{sawada2024release,
    title = {Release of Pre-Trained Models for the {J}apanese Language},
    author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
    booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
    month = {5},
    year = {2024},
    pages = {13898--13905},
    url = {https://aclanthology.org/2024.lrec-main.1213},
    note = {\url{https://arxiv.org/abs/2404.01657}}
}
```
References
```bibtex
@misc{qwen2.5,
    title = {Qwen2.5: A Party of Foundation Models},
    author = {Qwen Team},
    month = {September},
    year = {2024},
    url = {https://qwenlm.github.io/blog/qwen2.5/}
}

@article{qwen2,
    title = {Qwen2 Technical Report},
    author = {An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
    journal = {arXiv preprint arXiv:2407.10671},
    year = {2024}
}

@misc{litgpt-2023,
    author = {Lightning AI},
    title = {LitGPT},
    howpublished = {\url{https://github.com/Lightning-AI/litgpt}},
    year = {2023}
}
```
License