---
language:
- en
- zh
- vi
- id
- th
- fil
- ta
- ms
- km
- lo
- my
- jv
- su
license: llama3.1
library_name: transformers
pipeline_tag: text-generation
base_model: meta-llama/Llama-3.1-8B-Instruct
---
# Llama3.1 8B CPT SEA-LIONv3

SEA-LION is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.
Llama3.1 8B CPT SEA-LIONv3 Base is a multilingual model which has undergone continued pre-training from Llama-3.1-8B-Instruct on English and Southeast Asian text.
SEA-LION stands for Southeast Asian Languages In One Network.
- Developed by: Products Pillar, AI Singapore
- Funded by: Singapore NRF
- Model type: Decoder
- Languages: English, Chinese, Vietnamese, Indonesian, Thai, Filipino, Tamil, Malay, Khmer, Lao, Burmese, Javanese, Sundanese
- License: Llama 3.1 Community License
## Model Details

### Model Description
The continued pre-training data for Llama3.1 8B CPT SEA-LIONv3 Base encompasses approximately 200B tokens.
For tokenisation, the model employs the default tokenizer used in Llama3.1 8B Instruct.
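The model can be loaded and queried with the standard Hugging Face transformers text-generation API. Below is a minimal sketch; the repository id shown is an assumption (it does not appear in this card), so substitute the actual repo id if it differs.

```python
# Minimal sketch: load the base model and generate a continuation.
# NOTE: the repository id below is an assumption; replace it with the
# actual Hugging Face repo id for this model if it differs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aisingapore/llama3.1-8b-cpt-sea-lionv3-base"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the bfloat16 training precision
    device_map="auto",           # requires the accelerate package
)

# This is a base (continued pre-trained) model, so prompt it with plain
# text to complete rather than a chat template.
prompt = "Ibu kota Indonesia adalah"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```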
### Benchmark Performance

We evaluated the Llama3.1 8B CPT SEA-LIONv3 base model on general language capabilities.

#### General Language Capabilities
For the evaluation of general language capabilities, we employed the SEA HELM (also known as BHASA) evaluation benchmark across a variety of tasks. These tasks include Question Answering (QA), Sentiment Analysis (Sentiment), Toxicity Detection (Toxicity), Translation in both directions (Eng>Lang & Lang>Eng), Abstractive Summarization (Summ), Causal Reasoning (Causal) and Natural Language Inference (NLI).
Note: SEA HELM is implemented using prompts that elicit answers in a strict format. For all tasks, the model is expected to provide an answer tag from which the answer is automatically extracted. For tasks where options are provided, the answer must be one of the pre-defined options. The scores for each task are normalised to account for baseline performance due to random chance.
The evaluation was done five-shot with native prompts on a sample of 100-1000 instances for each dataset.
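The exact normalisation is defined by SEA HELM itself, but a common scheme (assumed here for illustration) rescales raw accuracy so that random-chance performance maps to 0 and a perfect score maps to 100:

```python
# Sketch of random-chance score normalisation (an assumption about the
# scheme SEA HELM uses; see the benchmark's documentation for the
# authoritative definition).
def normalise(raw_score: float, chance_baseline: float) -> float:
    """Map raw accuracy in [0, 1] so that random chance -> 0 and a
    perfect score -> 100."""
    return 100.0 * (raw_score - chance_baseline) / (1.0 - chance_baseline)

# Example: a 3-way NLI task has a random-chance baseline of 1/3.
print(normalise(0.60, 1.0 / 3.0))  # 40.0
```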
For more details on the Llama3.1 8B CPT SEA-LIONv3 base model's benchmark performance, please refer to the SEA HELM leaderboard at https://leaderboard.sea-lion.ai/
## Technical Specifications

### Infrastructure
Llama3.1 8B CPT SEA-LIONv3 was trained using MosaicML Composer on the following hardware:
| Training Details     | Llama3.1 8B CPT SEA-LIONv3 |
|----------------------|----------------------------|
| SingTel HGX-100      | 8+1 instances              |
| Nvidia H100 80GB GPU | 64+8                       |
| Training Duration    | 10 days                    |
### Configuration

| HyperParameter    | Llama3.1 8B CPT SEA-LIONv3 |
|-------------------|----------------------------|
| Precision         | bfloat16                   |
| Optimizer         | decoupled_adamw            |
| Scheduler         | weight_stable_decay        |
| Learning Rate     | 1.0e-5                     |
| Global Batch Size | 512                        |
| Micro Batch Size  | 1                          |
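The `weight_stable_decay` scheduler name suggests a warmup-stable-decay (WSD) style learning-rate curve: a short warmup to the peak rate, a long constant plateau, and a final decay. The sketch below illustrates that shape only; it is not the exact Composer scheduler, and the warmup and decay fractions are hypothetical rather than taken from the training run.

```python
# Illustrative warmup-stable-decay learning-rate curve (an assumption
# about the shape of the `weight_stable_decay` scheduler; the fractions
# below are hypothetical, not taken from the training configuration).
def wsd_lr(step: int, total_steps: int, peak_lr: float = 1.0e-5,
           warmup_frac: float = 0.01, decay_frac: float = 0.1) -> float:
    warmup_steps = int(total_steps * warmup_frac)
    decay_steps = int(total_steps * decay_frac)
    stable_end = total_steps - decay_steps
    if step < warmup_steps:                      # linear warmup to peak
        return peak_lr * step / max(1, warmup_steps)
    if step < stable_end:                        # constant plateau
        return peak_lr
    remaining = total_steps - step               # linear decay at the end
    return peak_lr * remaining / max(1, decay_steps)
```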
## Data

The Llama3.1 8B CPT SEA-LIONv3 base model was continued pre-trained on 200B tokens of the following data:
| Data Source                          | Unique Tokens (B) | Multiplier | Total Tokens (B) | Percentage (%) |
|--------------------------------------|-------------------|------------|------------------|----------------|
| StackV2                              | 40.0              | 1          | 40.0             | 20.00          |
| Wiki* + News* - English              | 5.0               | 1          | 5.0              | 2.50           |
| Fineweb-Edu                          | 7.5               | 1          | 7.5              | 3.75           |
| Dolma Project Gutenberg              | 5.0               | 1          | 5.0              | 2.50           |
| Dolma arXiv                          | 1.7               | 1          | 1.7              | 0.83           |
| Dolma StackExchange                  | 1.7               | 1          | 1.7              | 0.83           |
| Dolma Semantic Scholar               | 1.7               | 1          | 1.7              | 0.83           |
| Dolma OpenWebMath                    | 2.5               | 1          | 2.5              | 1.25           |
| Dolma Algebraic Stack                | 2.5               | 1          | 2.5              | 1.25           |
| Dolma Flan                           | 5.0               | 1          | 5.0              | 2.50           |
| Dolma Reddit                         | 5.0               | 1          | 5.0              | 2.50           |
| Dolma Megawika                       | 5.0               | 1          | 5.0              | 2.50           |
| Dolma CC News                        | 7.5               | 1          | 7.5              | 3.75           |
| Wiki* + News* - Chinese              | 3.5               | 4          | 14.0             | 7.00           |
| SEA-LION Pile - Chinese              | 12.0              | 1          | 12.0             | 6.00           |
| Wiki* + News* - Vietnamese           | 2.4               | 4          | 9.4              | 4.70           |
| VinBigData - Vietnamese              | 2.1               | 4          | 8.2              | 4.10           |
| SEA-LION Pile - Vietnamese           | 8.4               | 1          | 8.4              | 4.20           |
| Wiki* + News* - Indonesian           | 1.3               | 4          | 5.2              | 2.60           |
| SEA-LION Pile - Indonesian           | 20.8              | 1          | 20.8             | 10.40          |
| Wiki* + News* + WangChanBERTa - Thai | 1.3               | 4          | 5.2              | 2.60           |
| SEA-LION Pile - Thai                 | 14.8              | 1          | 14.8             | 7.40           |
| Wiki* + News - Filipino              | 0.2               | 4          | 0.9              | 0.43           |
| SEA-LION Pile - Filipino             | 2.1               | 1          | 2.1              | 1.07           |
| Wiki* + News - Tamil                 | 0.1               | 4          | 0.3              | 0.14           |
| SEA-LION Pile - Tamil                | 0.7               | 1          | 0.7              | 0.36           |
| Wiki* + News - Malay                 | 0.1               | 4          | 0.6              | 0.29           |
| SEA-LION Pile - Malay                | 1.4               | 1          | 1.4              | 0.71           |
| Wiki* + News - Khmer                 | 0.1               | 4          | 0.3              | 0.17           |
| SEA-LION Pile - Khmer                | 2.3               | 1          | 2.3              | 1.13           |
| Wiki* + News - Lao                   | 0.0               | 4          | 0.1              | 0.03           |
| SEA-LION Pile - Lao                  | 0.3               | 1          | 0.3              | 0.17           |
| Wiki* + News - Burmese               | 0.1               | 4          | 0.4              | 0.20           |
| SEA-LION Pile - Burmese              | 2.6               | 1          | 2.6              | 1.30           |
Note:
- All token counts were computed using the Llama 3.1 8B Instruct tokenizer
- Wiki* sources include Wikipedia, Wiki Books, Wiki Source, Wiki Voyage and Fandom Wiki
- News* sources include VOA, Global Voices, MediaCorp and VinBigData-News
- Tamil news is sourced with permission from Seithi
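The table's derived columns follow directly from the first two: Total Tokens (B) = Unique Tokens (B) × Multiplier, and Percentage (%) = Total / 200B. The short sketch below recomputes a few rows as a worked check; small discrepancies (e.g. 2.4 × 4 shown as 9.4) reflect rounding in the published unique-token counts.

```python
# Recompute the data-mix columns from unique tokens and multipliers.
# Values are copied from the table above; totals reflect rounding in
# the published unique-token counts.
rows = [
    ("StackV2", 40.0, 1),
    ("Wiki* + News* - Chinese", 3.5, 4),
    ("SEA-LION Pile - Indonesian", 20.8, 1),
]
BUDGET_B = 200.0  # total continued pre-training budget in billions

for name, unique_b, multiplier in rows:
    total_b = unique_b * multiplier   # Total Tokens (B)
    pct = 100.0 * total_b / BUDGET_B  # Percentage (%)
    print(f"{name}: {total_b:.1f}B tokens ({pct:.2f}%)")
```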
## Call for Contributions
We encourage researchers, developers, and language enthusiasts to actively contribute to the enhancement and expansion of SEA-LION. Contributions can involve identifying and reporting bugs, sharing pre-training, instruction, and preference data, improving documentation usability, proposing and implementing new model evaluation tasks and metrics, or training versions of the model in additional Southeast Asian languages. Join us in shaping the future of SEA-LION by sharing your expertise and insights to make these models more accessible, accurate, and versatile. Please check out our GitHub for further information on the call for contributions.
## The Team
Chan Adwin, Choa Esther, Cheng Nicholas, Huang Yuli, Lau Wayne, Lee Chwan Ren, Leong Wai Yi, Leong Wei Qi, Limkonchotiwat Peerat, Liu Bing Jie Darius, Montalan Jann Railey, Ng Boon Cheong Raymond, Ngui Jian Gang, Nguyen Thanh Ngan, Ong Brandon, Ong Tat-Wee David, Ong Zhi Hao, Rengarajan Hamsawardhini, Siow Bryan, Susanto Yosephine, Tai Ngee Chia, Tan Choon Meng, Teo Eng Sipp Leslie, Teo Wei Yi, Tjhi William, Teng Walter, Yeo Yeow Tong, Yong Xianbin
## Acknowledgements
AI Singapore is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the National University of Singapore.
## Contact

For more information, please contact us using the SEA-LION Inquiry Form.
Link to SEA-LION's GitHub repository.
## Disclaimer

This is the repository for the commercial continued pre-trained (base) model. The model has not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and code.
## References

### Thai Pre-Training Data Reference
```bibtex
@misc{lowphansirikul2021wangchanberta,
      title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
      author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
      year={2021},
      eprint={2101.09635},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```