---
license: apache-2.0
pipeline_tag: text-generation
language:
- it
- en
tags:
- chat
- minerva-7b
- gguf
- instruct
- dpo
base_model:
- sapienzanlp/Minerva-7B-instruct-v1.0
library_name: transformers
---
# Model Card for Minerva-7B-instruct-v1.0 in GGUF Format
Minerva is the first family of **LLMs pretrained from scratch on Italian**, developed by [Sapienza NLP](https://nlp.uniroma1.it)
in the context of the [Future Artificial Intelligence Research (FAIR)](https://fondazione-fair.it/) project, in collaboration with [CINECA](https://www.cineca.it/) and with additional contributions from [Babelscape](https://babelscape.com) and the [CREATIVE](https://nlp.uniroma1.it/creative/) PRIN Project.
Notably, the Minerva models are truly open (data and model) Italian-English LLMs, with approximately half of the pretraining data
consisting of Italian text. The full tech report is available at [https://nlp.uniroma1.it/minerva/blog/2024/11/26/tech-report](https://nlp.uniroma1.it/minerva/blog/2024/11/26/tech-report).
## Description
This is the model card for the GGUF conversion of [**Minerva-7B-instruct-v1.0**](https://huggingface.co/sapienzanlp/Minerva-7B-instruct-v1.0), a 7-billion-parameter model trained on almost 2.5 trillion tokens (1.14 trillion of Italian text,
1.14 trillion of English text, and 200 billion of code). This repository contains the model weights in float32 and float16 formats, as well as quantized versions in 8-bit, 6-bit, and 4-bit precision.
**Important**: This model requires [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit `6fe624783166e7355cec915de0094e63cd3558eb` (5 November 2024) or later.
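
As a quick-start sketch, the GGUF weights can be loaded through [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), the Python bindings for llama.cpp. The quantized filename below is an assumption for illustration; substitute the actual file you download from this repository, and make sure your installed bindings are built against a llama.cpp version that includes the commit above.

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Hypothetical filename: replace with the actual GGUF file from this repository.
llm = Llama(
    model_path="minerva-7b-instruct-v1.0.Q4_K_M.gguf",  # assumed name for the 4-bit variant
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload all layers to GPU if available; use 0 for CPU-only
)

# The instruct model expects a chat format; llama.cpp applies the chat
# template stored in the GGUF metadata.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Qual è la capitale d'Italia?"}],
    max_tokens=128,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```

Setting `n_gpu_layers=-1` offloads the whole model to the GPU when llama-cpp-python is compiled with GPU support; on a CPU-only build the flag is ignored and inference falls back to the CPU.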