---
license: mit
language:
- it
- en
library_name: transformers
tags:
- sft
- it
- mistral
- chatml
---

# Model Information

Azzurro-Quantized is a compact iteration of the model [Azzurro](https://huggingface.co/MoxoffSpA/Azzurro), optimized for efficiency. It is offered in two distinct configurations: a 4-bit version and an 8-bit version, each designed to maintain the model's effectiveness while significantly reducing its size and computational requirements.

- It was trained both on publicly available datasets, such as [SQUAD-it](https://huggingface.co/datasets/squad_it), and on datasets we created in-house.
- It is designed to understand and maintain context, making it well suited for Retrieval Augmented Generation (RAG) tasks and applications requiring contextual awareness.
- It is quantized in a 4-bit version and an 8-bit version following the procedure described [here](https://github.com/ggerganov/llama.cpp).

# Evaluation

We evaluated the model using the same test sets used for the [Open Ita LLM Leaderboard](https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard):

| hellaswag_it acc_norm | arc_it acc_norm | m_mmlu_it 5-shot acc | Average |
|:----------------------|:----------------|:---------------------|:--------|
| 0.6067                | 0.4405          | 0.5112               | 0.52    |

## Usage

You need to download the .gguf model file first (a hedged download sketch follows the Bias, Risks and Limitations section below), then install the Python bindings:

```bash
pip install llama-cpp-python
```

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU.
# Set it to 0 if no GPU acceleration is available on your system.
llm = Llama(
    model_path="/path/to/model.gguf",  # Download the model file first
    n_ctx=2048,       # The max sequence length to use; longer sequence lengths require more resources
    n_threads=8,      # The number of CPU threads to use; tailor to your system
    n_gpu_layers=35   # The number of layers to offload to GPU, if GPU acceleration is available
)

# Simple inference example
question = """Quanto è alta la torre di Pisa?"""  # "How tall is the Tower of Pisa?"
context = """
La Torre di Pisa è un campanile del XII secolo, famoso per la sua inclinazione. Alta circa 56 metri.
"""  # "The Tower of Pisa is a 12th-century bell tower, famous for its lean. About 56 metres tall."

prompt = f"Domanda: {question}, contesto: {context}"

output = llm(
    f"[INST] {prompt} [/INST]",  # Prompt in the Mistral instruction format
    max_tokens=512,              # Generate up to 512 tokens
    stop=["[INST]"],             # Example stop token
    echo=True                    # Whether to echo the prompt
)

# Chat Completion API (loads the model with the Mistral instruct chat template)
llm = Llama(model_path="/path/to/model.gguf", chat_format="mistral-instruct")
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": prompt},
    ]
)

assistant_message = response['choices'][0]['message']['content']
print(assistant_message)
```

## Bias, Risks and Limitations

Azzurro-Quantized and its original model [Azzurro](https://huggingface.co/MoxoffSpA/Azzurro) have not been aligned to human preferences for safety within an RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus used to train the base model [mistralai/Mistral-7B-v0.2](https://huggingface.co/mistralai/Mistral-7B-v0.2) were; however, it is likely to have included a mix of web data and technical sources like books and code.
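As noted in the Usage section, the .gguf file must be downloaded before it can be loaded. Below is a minimal sketch using the `huggingface_hub` library; the repo id and filename are assumptions for illustration only, so check the repository's Files tab for the actual names of the 4-bit and 8-bit files:

```python
from huggingface_hub import hf_hub_download

# Hypothetical repo id and filename - verify the actual names on the
# model repository's Files tab before running.
model_path = hf_hub_download(
    repo_id="MoxoffSpA/Azzurro-Quantized",     # assumed repo id
    filename="azzurro-quantized.Q4_K_M.gguf",  # assumed 4-bit file name
)
print(model_path)  # local cache path to pass as model_path to Llama(...)
```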
## Links to resources

- SQUAD-it dataset: https://huggingface.co/datasets/squad_it
- Mistral-7B-v0.2 original weights: https://models.mistralcdn.com/mistral-7b-v0-2/mistral-7B-v0.2.tar
- Mistral-7B-v0.2 model: https://huggingface.co/alpindale/Mistral-7B-v0.2-hf
- Open Ita LLM Leaderboard: https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard

## Base version

The non-quantized version is available here: https://huggingface.co/MoxoffSpA/Azzurro

## The Moxoff Team

Jacopo Abate, Marco D'Ambra, Luigi Simeone, Gianpaolo Francesco Trotta