marcodambra committed on
Commit ba45ffb · verified · 1 Parent(s): 318249c

Update README.md

Files changed (1): README.md +4 -4
README.md CHANGED
@@ -15,13 +15,13 @@ tags:
 
 XXXXQuantized is a compact iteration of the model [XXXX](https://huggingface.co/MoxoffSpA/xxxx), optimized for efficiency.
 
- It is offered in two distinct configurations: a 4-bit version and an 8-bit version, each designed to maintain the model's effectiveness while significantly reducing its size.
+ It is offered in two distinct configurations: a 4-bit version and an 8-bit version, each designed to maintain the model's effectiveness while significantly reducing its size
 and computational requirements.
 
 - It's trained both on publicly available datasets, like [SQUAD-it](https://huggingface.co/datasets/squad_it), and datasets we've created in-house.
 - It's designed to understand and maintain context, making it ideal for Retrieval Augmented Generation (RAG) tasks and applications requiring contextual awareness.
- - It is quantized in a 4-bit version and an 8-bit version suing the prcedure [here](https://github.com/ggerganov/llama.cpp).
- -
+ - It is quantized in a 4-bit version and an 8-bit version following the procedure [here](https://github.com/ggerganov/llama.cpp).
+
 # Evaluation
 
 We evaluated the model using the same test sets used for the [Open Ita LLM Leaderboard](https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard):
@@ -60,7 +60,7 @@ print(decoded[0])
 
 ## Bias, Risks and Limitations
 
- xxxx has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of
+ xxxxQuantized and its original model [xxxx](https://huggingface.co/MoxoffSpA/xxxx) have not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of
 responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). The size and composition of the corpus
 used to train the base model [mistralai/Mistral-7B-v0.2](https://huggingface.co/mistralai/Mistral-7B-v0.2) are also unknown, but it likely included a mix of Web data and technical sources
 like books and code.
 
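
The quantization bullet in the diff above links to the llama.cpp repository for the procedure. Below is a minimal, hedged sketch of that workflow; the script and binary names (`convert_hf_to_gguf.py`, `llama-quantize`), the checkpoint path, and the output file names are assumptions that vary across llama.cpp revisions, not details confirmed by this commit:

```python
# Hedged sketch of the llama.cpp quantization workflow referenced in the README.
# Assumptions: the script/binary names below match the llama.cpp revision in use,
# and "path/to/xxxx" holds the original Hugging Face checkpoint.
import subprocess

# 1. Convert the Hugging Face checkpoint to a full-precision GGUF file.
subprocess.run(
    ["python", "convert_hf_to_gguf.py", "path/to/xxxx", "--outfile", "xxxx-f16.gguf"],
    check=True,
)

# 2. Produce 4-bit and 8-bit variants from the full-precision file.
for qtype, outfile in [("Q4_K_M", "xxxx-q4.gguf"), ("Q8_0", "xxxx-q8.gguf")]:
    subprocess.run(["./llama-quantize", "xxxx-f16.gguf", outfile, qtype], check=True)
```

Q4_K_M and Q8_0 are standard llama.cpp quantization types; the diff does not state which exact types this repository used.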
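
For completeness, a minimal usage sketch for one of the quantized files, assuming they are in GGUF format and loaded via the third-party llama-cpp-python bindings; the file name `xxxx-q4.gguf` is hypothetical, and the repository's actual file list is authoritative:

```python
# Hedged sketch: run a GGUF-quantized model with llama-cpp-python.
# Assumption: the 4-bit file is named xxxx-q4.gguf; adjust to the real file name.
from llama_cpp import Llama

llm = Llama(model_path="xxxx-q4.gguf", n_ctx=4096)

# Italian QA-style prompt, in line with the model's SQUAD-it training data.
out = llm("Qual è la capitale d'Italia?", max_tokens=64)
print(out["choices"][0]["text"])
```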