---
base_model:
- aisingapore/llama3.1-8b-cpt-sea-lionv3-base
language:
- en
- zh
- vi
- id
- th
- fil
- ta
- ms
- km
- lo
- my
- jv
- su
library_name: transformers
license: llama3.1
pipeline_tag: text-generation
---

# Llama3.1 8B CPT SEA-LIONv3 Instruct

SEA-LION is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.

Llama3.1 8B CPT SEA-LIONv3 Instruct is a multilingual model that has been fine-tuned in two stages on approximately **12.3M English instruction-completion pairs** alongside a pool of **4.5M Southeast Asian instruction-completion pairs** from SEA languages such as Indonesian, Thai, Vietnamese and Tamil.

SEA-LION stands for _Southeast Asian Languages In One Network_.

- **Developed by:** Products Pillar, AI Singapore
- **Funded by:** Singapore NRF
- **Model type:** Decoder
- **Languages supported:** English, Chinese, Vietnamese, Indonesian, Thai, Filipino, Tamil, Malay, Khmer, Lao, Burmese, Javanese, Sundanese
- **License:** [Llama 3.1 Community License](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct/blob/main/LICENSE)

## Description

This repo contains `GGUF` format model files for [aisingapore/llama3.1-8b-cpt-sea-lionv3-instruct](https://huggingface.co/aisingapore/llama3.1-8b-cpt-sea-lionv3-instruct).

#### Model Weights

Included in this repository:

- [llama3.1-8b-cpt-sea-lionv3-instruct-Q2_K](https://huggingface.co/aisingapore/llama3.1-8b-cpt-sea-lionv3-instruct-gguf/blob/main/llama3.1-8B-cpt-sea-lionv3-instruct-Q2_K.gguf)
- [llama3.1-8b-cpt-sea-lionv3-instruct-Q3_K_M](https://huggingface.co/aisingapore/llama3.1-8b-cpt-sea-lionv3-instruct-gguf/blob/main/llama3.1-8B-cpt-sea-lionv3-instruct-Q3_K_M.gguf)
- [llama3.1-8b-cpt-sea-lionv3-instruct-Q4_0](https://huggingface.co/aisingapore/llama3.1-8b-cpt-sea-lionv3-instruct-gguf/blob/main/llama3.1-8B-cpt-sea-lionv3-instruct-Q4_0.gguf)
- [llama3.1-8b-cpt-sea-lionv3-instruct-Q4_K_M](https://huggingface.co/aisingapore/llama3.1-8b-cpt-sea-lionv3-instruct-gguf/blob/main/llama3.1-8B-cpt-sea-lionv3-instruct-Q4_K_M.gguf)
- [llama3.1-8b-cpt-sea-lionv3-instruct-Q5_0](https://huggingface.co/aisingapore/llama3.1-8b-cpt-sea-lionv3-instruct-gguf/blob/main/llama3.1-8B-cpt-sea-lionv3-instruct-Q5_0.gguf)
- [llama3.1-8b-cpt-sea-lionv3-instruct-Q5_K_M](https://huggingface.co/aisingapore/llama3.1-8b-cpt-sea-lionv3-instruct-gguf/blob/main/llama3.1-8B-cpt-sea-lionv3-instruct-Q5_K_M.gguf)
- [llama3.1-8b-cpt-sea-lionv3-instruct-Q6_K](https://huggingface.co/aisingapore/llama3.1-8b-cpt-sea-lionv3-instruct-gguf/blob/main/llama3.1-8B-cpt-sea-lionv3-instruct-Q6_K.gguf)
- [llama3.1-8b-cpt-sea-lionv3-instruct-Q8_0](https://huggingface.co/aisingapore/llama3.1-8b-cpt-sea-lionv3-instruct-gguf/blob/main/llama3.1-8B-cpt-sea-lionv3-instruct-Q8_0.gguf)
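#### Usage

As a quick reference, the sketch below shows one possible way to download and run one of the GGUF files listed above with the `huggingface_hub` and `llama-cpp-python` packages. This is a minimal sketch rather than an official recipe: the chosen quantization (Q4_K_M), context length, GPU offload setting, and sampling parameters are illustrative and should be adjusted to your hardware and use case.

```python
# Minimal sketch: download one of the GGUF files in this repo and run it with
# llama-cpp-python. Assumes `pip install huggingface_hub llama-cpp-python`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q4_K_M quantization (any of the files listed above works the same way).
model_path = hf_hub_download(
    repo_id="aisingapore/llama3.1-8b-cpt-sea-lionv3-instruct-gguf",
    filename="llama3.1-8B-cpt-sea-lionv3-instruct-Q4_K_M.gguf",
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,        # context window; illustrative, adjust to available memory
    n_gpu_layers=-1,   # offload all layers if llama-cpp-python was built with GPU support
)

# If the GGUF metadata carries a chat template (typical for recent conversions),
# the chat-completion API applies it automatically.
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Apa itu SEA-LION? Jawab dalam Bahasa Indonesia."},
    ],
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```

The same files can also be served with the `llama.cpp` CLI or any other GGUF-compatible runtime; the quantization level trades file size and memory for output quality, with Q8_0 closest to the original weights and Q2_K the smallest.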
### Caveats

It is important for users to be aware that our model exhibits certain limitations that warrant consideration. Like many LLMs, the model can hallucinate and occasionally generate irrelevant content, introducing fictional elements that are not grounded in the provided context. Users should also exercise caution when interpreting and validating the model's responses due to potential inconsistencies in its reasoning.

## Limitations

### Safety

Current SEA-LION models, including this commercially permissive release, have not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights and codes.

## Technical Specifications

### Fine-Tuning Details

Llama3.1 8B CPT SEA-LIONv3 Instruct was tuned using a combination of full-parameter fine-tuning, on-policy alignment, and model merges of the best-performing checkpoints. Fine-tuning took approximately 1024 GPU hours on a single node of 8x H100-80GB GPUs.

## Data

Llama3.1 8B CPT SEA-LIONv3 Instruct was trained on a wide range of synthetic instructions, alongside publicly available instructions hand-curated by the team with the assistance of native speakers. In addition, special care was taken to ensure that the datasets used had commercially permissive licenses by verifying them with the original data sources.

## Call for Contributions

We encourage researchers, developers, and language enthusiasts to actively contribute to the enhancement and expansion of SEA-LION. Contributions can involve identifying and reporting bugs, sharing pre-training, instruction, and preference data, improving documentation usability, proposing and implementing new model evaluation tasks and metrics, or training versions of the model in additional Southeast Asian languages. Join us in shaping the future of SEA-LION by sharing your expertise and insights to make these models more accessible, accurate, and versatile. Please check out our GitHub for further information on the call for contributions.

## The Team

Chan Adwin, Choa Esther, Cheng Nicholas, Huang Yuli, Hulagadri Adithya Venkatadri, Lau Wayne, Lee Chwan Ren, Leong Wai Yi, Leong Wei Qi, Limkonchotiwat Peerat, Liu Bing Jie Darius, Montalan Jann Railey, Ng Boon Cheong Raymond, Ngui Jian Gang, Nguyen Thanh Ngan, Ong Brandon, Ong Tat-Wee David, Ong Zhi Hao, Rengarajan Hamsawardhini, Siow Bryan, Susanto Yosephine, Tai Ngee Chia, Tan Choon Meng, Teo Eng Sipp Leslie, Teo Wei Yi, Tjhi William, Teng Walter, Yeo Yeow Tong, Yong Xianbin

## Acknowledgements

[AI Singapore](https://aisingapore.org/) is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the National University of Singapore.

## Contact

For more information, please contact us using this [SEA-LION Inquiry Form](https://forms.gle/sLCUVb95wmGf43hi6).

[Link to SEA-LION's GitHub repository](https://github.com/aisingapore/sealion)

## Disclaimer

This is the repository for the commercial instruction-tuned model. The model has _not_ been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes.