
💨📟 Vikhr-Qwen-2.5-0.5B-Instruct

RU

Инструктивная модель на основе Qwen-2.5-0.5B-Instruct, обученная на русскоязычном датасете GrandMaster-PRO-MAX. В 4 раза эффективнее базовой модели и идеально подходит для запуска на слабых мобильных устройствах.

EN

Instruction-tuned model based on Qwen-2.5-0.5B-Instruct, trained on the Russian-language GrandMaster-PRO-MAX dataset. It is four times more efficient than the base model, which makes it well suited for deployment on low-end mobile devices.

Рекомендуемая температура для генерации: 0.3 / Recommended generation temperature: 0.3.
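The recommended temperature of 0.3 sharpens sampling considerably compared with the default of 1.0. A minimal sketch of the effect using a plain temperature-scaled softmax (the logit values here are illustrative, not taken from the model):

```python
import math

def softmax(logits, temperature=1.0):
    # Scale logits by 1/T, then normalize; lower T sharpens the distribution.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
p_default = softmax(logits, temperature=1.0)
p_low = softmax(logits, temperature=0.3)
# At T=0.3 the top-scoring token takes a much larger share of probability
# mass than at T=1.0, so sampling behaves more deterministically.
```

This is why a low temperature is typically preferred for small instruction-tuned models: it reduces the chance of sampling low-probability tokens that derail the answer.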

Авторы / Authors

@article{nikolich2024vikhr,
  title={Vikhr: The Family of Open-Source Instruction-Tuned Large Language Models for Russian},
  author={Aleksandr Nikolich and Konstantin Korolev and Sergey Bratchikov and Nikolay Kompanets and Artem Shelmanov},
  journal={arXiv preprint arXiv:2405.13929},
  year={2024},
  url={https://arxiv.org/pdf/2405.13929}
}
Format: GGUF
Model size: 494M params
Architecture: qwen2

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit, 32-bit
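A GGUF file from this repository can be run locally with llama-cpp-python. The sketch below assumes the library is installed and one of the quantized files has already been downloaded; the filename and the system prompt are hypothetical placeholders, and the ChatML-style prompt format is the one used by qwen2-family models:

```python
def build_chat_prompt(system, user):
    """Format a single-turn prompt in the ChatML style used by qwen2 models."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

def generate(model_path, system, user):
    """Load a GGUF file with llama-cpp-python and sample at the
    recommended temperature of 0.3."""
    from llama_cpp import Llama  # pip install llama-cpp-python

    llm = Llama(model_path=model_path, n_ctx=2048)
    prompt = build_chat_prompt(system, user)
    out = llm(prompt, max_tokens=256, temperature=0.3, stop=["<|im_end|>"])
    return out["choices"][0]["text"]

# Example call (filename is a placeholder for whichever quantization you pick):
# print(generate("Vikhr-Qwen-2.5-0.5B-instruct-Q4_K_M.gguf",
#                "Ты — полезный ассистент.", "Привет! Кто ты?"))
```

Lower-bit quantizations trade answer quality for a smaller footprint, which matters most on the mobile devices this model targets.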


Dataset used to train Vikhrmodels/Vikhr-Qwen-2.5-0.5B-instruct-GGUF: GrandMaster-PRO-MAX