---
license: mit
language:
- it
- en
library_name: transformers
tags:
- sft
- it
- mistral
- chatml
---

# Model Information

AzzurroQuantized is a compact iteration of the model [Azzurro](https://huggingface.co/MoxoffSpA/Azzurro), optimized for efficiency.

It is offered in two distinct configurations: a 4-bit version and an 8-bit version, each designed to maintain the model's effectiveness while significantly reducing its size
and computational requirements.

- It's trained on both publicly available datasets, like [SQUAD-it](https://huggingface.co/datasets/squad_it), and datasets we've created in-house.
- It's designed to understand and maintain context, making it ideal for Retrieval Augmented Generation (RAG) tasks and applications requiring contextual awareness.
- It is quantized in a 4-bit version and an 8-bit version following the procedure [here](https://github.com/ggerganov/llama.cpp); a sketch of downloading either variant follows below.
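
As a minimal sketch of fetching either quantized file with `huggingface_hub` (the 4-bit filename appears in the usage example below; the 8-bit filename is an assumption, so check the repository's file list):

```python
from huggingface_hub import hf_hub_download

# The 4-bit filename is taken from the usage example below;
# the 8-bit filename is a hypothetical guess -- verify it in the repo.
FILES = {
    "4bit": "Azzurro-ggml-Q4_K_M.gguf",
    "8bit": "Azzurro-ggml-Q8_0.gguf",  # assumption: actual name may differ
}

def download_variant(variant: str = "4bit") -> str:
    """Download the chosen quantized variant and return its local path."""
    return hf_hub_download(
        repo_id="MoxoffSpA/AzzurroQuantized",
        filename=FILES[variant],
    )

model_path = download_variant("4bit")
```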

# Evaluation

We evaluated the model using the same test sets used for the [Open Ita LLM Leaderboard](https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard):

| hellaswag_it acc_norm | arc_it acc_norm | m_mmlu_it 5-shot acc | Average |
|:----------------------|:----------------|:---------------------|:--------|
| 0.6067                | 0.4405          | 0.5112               | 0.5195  |
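
For reference, the average is the unweighted mean of the three scores: (0.6067 + 0.4405 + 0.5112) / 3 ≈ 0.5195.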

## Usage

You need to download the `.gguf` model file first.

If you want to run on CPU, install these dependencies:

```bash
pip install llama-cpp-python huggingface_hub
```

If you want to use the GPU instead, build llama-cpp-python with cuBLAS support:

```bash
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install huggingface_hub llama-cpp-python --force-reinstall --upgrade --no-cache-dir
```
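
Note that more recent llama.cpp / llama-cpp-python releases renamed the cuBLAS build flag; if the command above has no effect or fails, `CMAKE_ARGS="-DGGML_CUDA=on"` is the newer equivalent.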

Then use the following code to get a response to a prompt.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="MoxoffSpA/AzzurroQuantized",
    filename="Azzurro-ggml-Q4_K_M.gguf"
)

# Set n_gpu_layers to the number of layers to offload to GPU.
# Leave it at 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path=model_path,
  n_ctx=2048,      # Max sequence length; longer contexts require more resources
  n_threads=8,     # Number of CPU threads; tune to your system
  n_gpu_layers=0   # Number of layers to offload to GPU
)

# Simple inference example
question = """Quanto è alta la torre di Pisa?"""
context = """
La Torre di Pisa è un campanile del XII secolo, famoso per la sua inclinazione. Alta circa 56 metri.
"""

prompt = f"Domanda: {question}, contesto: {context}"

output = llm(
  f"[INST] {prompt} [/INST]",  # Prompt in Mistral instruction format
  max_tokens=128,              # Maximum number of new tokens to generate
  stop=["\n"],                 # Stop generation at the first newline
  echo=True,                   # Include the prompt in the output
  temperature=0.1,
  top_p=0.95
)

# Print the generated text
print(output['choices'][0]['text'])
```
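
llama-cpp-python also exposes a higher-level chat completion API. A minimal sketch, assuming the built-in `mistral-instruct` chat format matches the `[INST] ... [/INST]` template used above (verify against the model's actual template):

```python
from llama_cpp import Llama

# Reuses model_path, question and context from the example above.
# chat_format="mistral-instruct" is an assumption that mirrors the
# [INST] ... [/INST] prompt shown earlier; verify it fits this model.
chat_llm = Llama(
    model_path=model_path,
    n_ctx=2048,
    chat_format="mistral-instruct"
)

response = chat_llm.create_chat_completion(
    messages=[
        {"role": "user", "content": f"Domanda: {question}, contesto: {context}"}
    ],
    max_tokens=128,
    temperature=0.1
)

print(response["choices"][0]["message"]["content"])
```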

## Bias, Risks and Limitations

AzzurroQuantized and its original model [Azzurro](https://huggingface.co/MoxoffSpA/Azzurro) have not been aligned to human preferences for safety within an RLHF phase or deployed with in-the-loop filtering of
responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). The size and composition of the corpus used to train the base model
[mistralai/Mistral-7B-v0.2](https://huggingface.co/mistralai/Mistral-7B-v0.2) are also unknown; however, it likely included a mix of web data and technical sources
such as books and code.

## Links to resources

- SQUAD-it dataset: https://huggingface.co/datasets/squad_it
- Mistral_7B_v0.2 original weights: https://models.mistralcdn.com/mistral-7b-v0-2/mistral-7B-v0.2.tar
- Mistral_7B_v0.2 model: https://huggingface.co/alpindale/Mistral-7B-v0.2-hf
- Open Ita LLM Leaderboard: https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard

## Base version

The non-quantized version is available here:
https://huggingface.co/MoxoffSpA/Azzurro

## The Moxoff Team

Jacopo Abate, Marco D'Ambra, Luigi Simeone, Gianpaolo Francesco Trotta