πŸ‡ΉπŸ‡­ OpenThaiGPT 1.0.0-alpha

OpenThaiGPT 1.0.0-alpha is the first Thai implementation of a 7B-parameter LLaMA v2 Chat model, finetuned to follow Thai-translated instructions, and built on the Hugging Face LLaMA implementation.

---- LoRA Adapter Format of OpenThaiGPT 1.0.0-alpha ----

Upgrade from OpenThaiGPT 0.1.0-beta

  • Uses Facebook's LLaMA v2 7B chat model, pretrained on over 2 trillion tokens, as the base model.
  • Context length is upgraded from 2,048 tokens to 4,096 tokens.
  • Allows research and commercial use.
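Because the base model is LLaMA v2 Chat, prompts presumably need to follow the Llama-2 chat template. The sketch below builds a single-turn prompt in that format; whether this finetune kept the base model's exact template is an assumption, so treat it as a starting point and verify against the finetune repo.

```python
# Llama-2 chat template markers (assumption: the finetune keeps the
# base model's [INST] / <<SYS>> format; check the finetune repo).
B_INST, E_INST = "[INST]", "[/INST]"
B_SYS, E_SYS = "<<SYS>>\n", "\n<</SYS>>\n\n"

def build_prompt(user_message: str, system_prompt: str = "") -> str:
    """Wrap a single-turn user message in the Llama-2 chat template."""
    sys_block = f"{B_SYS}{system_prompt}{E_SYS}" if system_prompt else ""
    return f"{B_INST} {sys_block}{user_message} {E_INST}"

# Thai user message with an (assumed) English system prompt.
prompt = build_prompt("สวัสดีครับ", system_prompt="You are a helpful Thai assistant.")
```

The resulting string is what gets tokenized and fed to the model; multi-turn chat would repeat the `[INST] … [/INST]` pair per turn.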

Pretrained Model

Support

License

Source Code: Apache Software License 2.0.
Weights: Research and commercial use.

Code and Weight

Colab Demo: https://colab.research.google.com/drive/1kDQidCtY9lDpk49i7P3JjLAcJM04lawu?usp=sharing
Finetune Code: https://github.com/OpenThaiGPT/openthaigpt-finetune-010beta
Inference Code: https://github.com/OpenThaiGPT/openthaigpt
Weight (Lora Adapter): https://huggingface.co/openthaigpt/openthaigpt-1.0.0-alpha-7b-chat
Weight (Huggingface Checkpoint): https://huggingface.co/openthaigpt/openthaigpt-1.0.0-alpha-7b-chat-ckpt-hf
Weight (GGML): https://huggingface.co/openthaigpt/openthaigpt-1.0.0-alpha-7b-chat-ggml
Weight (Quantized 4bit GGML): https://huggingface.co/openthaigpt/openthaigpt-1.0.0-alpha-7b-chat-ggml-q4
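The LoRA adapter weights above would typically be applied on top of the Llama-2 base model with the PEFT library. A minimal sketch follows; the base-model repo id and generation settings here are assumptions (the adapter config on the Hub records the actual base model), and the base weights are a multi-gigabyte download.

```python
# Sketch: attach the OpenThaiGPT LoRA adapter to the Llama-2 7B chat base.
# The adapter id comes from this card; the base-model id is an assumption.
BASE_MODEL = "meta-llama/Llama-2-7b-chat-hf"  # assumed base model
ADAPTER = "openthaigpt/openthaigpt-1.0.0-alpha-7b-chat"

def load_model():
    """Download the base weights (~13 GB) and attach the LoRA adapter."""
    # Imports are local so the repo-id constants above are usable
    # without transformers/peft installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")
    model = PeftModel.from_pretrained(base, ADAPTER)
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_model()
    inputs = tokenizer("[INST] สวัสดีครับ [/INST]", return_tensors="pt").to(model.device)
    print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0]))
```

For the merged Hugging Face checkpoint or the GGML weights listed above, skip PEFT and load the checkpoint directly with transformers or a GGML-compatible runtime instead.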

Sponsors

Pantip.com, ThaiSC

Powered by

OpenThaiGPT Volunteers, Artificial Intelligence Entrepreneur Association of Thailand (AIEAT), and Artificial Intelligence Association of Thailand (AIAT)

Authors

Disclaimer: Provided responses are not guaranteed.
