---
base_model: unsloth/Llama-3.2-3B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
- luau
- roblox
- code generation
license: llama3.2
language:
- en
pipeline_tag: text-generation
---

![BY_PINKSTACK.png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/2xMulpuSlZ3C1vpGgsAYi.png)

This is a lightweight, on-device-ready AI model intended for general use, and it is particularly good at generating Roblox Luau scripts. It is not related to the Roblox Corporation in any way.

# 🤖 Which quant is right for you?

- ***Q4:*** Best for edge devices such as phones or older laptops thanks to its compact size; quality is acceptable and fully usable.
- ***Q5:*** Best for most mid-range devices (e.g. a GTX 1080); good quality with fast responses.
- ***Q8:*** Best for modern high-end devices (e.g. an RTX 3060 Ti); responses are very high quality, but generation is slower than Q5.

## Things you should be aware of when using PGAM models (Pinkstack General Accuracy Models) 🤖

This PGAM is based on Meta Llama 3.2 3B, which we fine-tuned on additional Roblox Luau data so that its outputs resemble those of the Roblox AI documentation assistant. We trained on [this dataset](https://huggingface.co/datasets/mahiatlinux/luau_corpus-ShareGPT-for-EDM), which is derived from Roblox/luau_corpus.

To use this model, you need a runtime that supports the GGUF file format. It uses the Llama 3 chat template, and we highly recommend running it with a system prompt; see the example at the end of this card.

# Extra information

- **Developed by:** Pinkstack
- **License:** Llama 3.2 Community License
- **Finetuned from model:** unsloth/Llama-3.2-3B-Instruct-bnb-4bit

This model was trained using [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

This model is not affiliated in any way with the Roblox Corporation.

Used this model? Don't forget to leave a like :) 💖
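
## Example usage

A minimal sketch of running one of the GGUF quants with llama-cpp-python, using the Llama 3 chat template and a system prompt. The GGUF filename, system prompt, and user prompt below are placeholders, not files or prompts shipped with this repo; substitute the quant you actually downloaded.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3.2-3B-Instruct-luau.Q5_K_M.gguf",  # hypothetical filename; use your downloaded quant
    n_ctx=4096,                # context window
    chat_format="llama-3",     # the model uses the Llama 3 chat template
)

response = llm.create_chat_completion(
    messages=[
        # A system prompt is highly recommended, as noted above.
        {"role": "system", "content": "You are a helpful Roblox Luau scripting assistant."},
        {"role": "user", "content": "Write a Luau script that changes a part's color when it is touched."},
    ],
    max_tokens=512,
    temperature=0.7,
)

print(response["choices"][0]["message"]["content"])
```

Any other GGUF-capable runtime (for example llama.cpp itself or a GUI front-end that supports the Llama 3 template) should work the same way: load the quant, select the Llama 3 chat template, and set a system prompt.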