# YugoGPT Instruct
YugoGPT Instruct is a fine-tuned version of the YugoGPT base model designed specifically for translation tasks involving Serbian, Croatian, and Bosnian languages. Unlike the base model, this instruct model is optimized for following user instructions, offering improved performance in instruction-based interactions.
## Overview
YugoGPT Instruct builds upon the powerful capabilities of the YugoGPT base model, fine-tuning it to enhance its usability in structured and directive tasks. This model is ideal for translation workflows where accuracy and context preservation are critical.
## Features
- Specialized for BCS Languages: Tailored for Serbian, Croatian, and Bosnian language translations.
- Instruction Following: Fine-tuned to better adhere to user-provided instructions.
- Flexible Deployment: Compatible with various quantization formats for different computational environments.
## Quantization Formats
Several quantization formats are available to suit different performance and resource budgets:
| Filename | Quant Type | Description |
|---|---|---|
| YugoGPT-7B-Instruct-F16 | F16 | Full F16 precision, maximum quality. |
| YugoGPT-7B-Instruct-Q8_0 | Q8_0 | Extremely high quality. |
| YugoGPT-7B-Instruct-Q6_K | Q6_K | Very high quality, near perfect, recommended. |
| YugoGPT-7B-Instruct-Q5_K_M | Q5_K_M | High quality, recommended. |
| YugoGPT-7B-Instruct-Q5_K_S | Q5_K_S | High quality with optimal trade-offs. |
| YugoGPT-7B-Instruct-Q4_K_M | Q4_K_M | Good quality, optimized for speed. |
| YugoGPT-7B-Instruct-Q4_K_S | Q4_K_S | Slightly lower quality with more savings. |
| YugoGPT-7B-Instruct-Q3_K_L | Q3_K_L | Lower quality, good for low-RAM systems. |
| YugoGPT-7B-Instruct-Q3_K_M | Q3_K_M | Low quality, optimized for size. |
| YugoGPT-7B-Instruct-Q3_K_S | Q3_K_S | Low quality, not recommended. |
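As a sketch, a single quant can be fetched from this repository with the `huggingface-cli` tool. The exact filename, including the `.gguf` extension, is an assumption based on the table above:

```bash
# Install the Hugging Face Hub CLI, then download one quant file.
# The filename below (with .gguf extension) is assumed from the table above.
pip install -U "huggingface_hub[cli]"
huggingface-cli download W4D/YugoGPT-7B-Instruct-GGUF \
  YugoGPT-7B-Instruct-Q5_K_M.gguf \
  --local-dir .
```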
## Usage
To use the model with Ollama, create it from the modelfile provided in this repository. Follow Ollama's setup instructions to get started, and replace `{__FILE_LOCATION__}` in the modelfile with the filename of the quant you want to use before creating the model with the Ollama CLI.
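A minimal sketch of that workflow is shown below. The quant filename, the `.gguf` extension, the modelfile's local name, and the model name `yugogpt-instruct` are assumptions; the repository's modelfile may also define a prompt template and parameters that should be left intact:

```bash
# Point the provided modelfile at the downloaded quant; the filename below
# is an assumption -- use whichever quant you actually downloaded.
sed -i 's|{__FILE_LOCATION__}|./YugoGPT-7B-Instruct-Q5_K_M.gguf|' modelfile

# Register the model with Ollama under a local name, then run it.
ollama create yugogpt-instruct -f modelfile
ollama run yugogpt-instruct
```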
## Licensing
This model is released under the Apache 2.0 License, the same as the YugoGPT base repository.
## Credits
- Base Model: [YugoGPT](https://huggingface.co/gordicaleksa/YugoGPT) by Aleksa Gordić
- Fine-Tuning Framework: Unsloth