Upload folder using huggingface_hub
- README.md +85 -0
- added_tokens.json +4 -0
- special_tokens_map.json +28 -0
- tokenizer.json +0 -0
- tokenizer.model +3 -0
- tokenizer_config.json +65 -0
README.md
ADDED
@@ -0,0 +1,85 @@
---
license: mit
library_name: "trl"
tags:
- SFT
- WeniGPT
base_model: HuggingFaceH4/zephyr-7b-beta
model-index:
- name: Weni/WeniGPT-QA-Zephyr-7B-3.0.2-SFT
  results: []
language: ['pt']
---

# Weni/WeniGPT-QA-Zephyr-7B-3.0.2-SFT

This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the dataset Weni/WeniGPT-QA-Binarized-1.2.0 with the SFT trainer. It is part of the WeniGPT project for [Weni](https://weni.ai/).

It achieves the following results on the evaluation set:
{'eval_loss': 1.2787697315216064, 'eval_runtime': 97.2566, 'eval_samples_per_second': 2.406, 'eval_steps_per_second': 0.308, 'epoch': 2.92}
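The throughput figures above are mutually consistent: runtime times samples/s gives the evaluation-set size, and samples/s divided by steps/s gives the effective eval batch size. A quick arithmetic check (a sketch; rounding is approximate):

```python
# Consistency check on the reported evaluation metrics
eval_runtime = 97.2566
eval_samples_per_second = 2.406
eval_steps_per_second = 0.308

num_eval_samples = eval_runtime * eval_samples_per_second
batch_size = eval_samples_per_second / eval_steps_per_second
print(round(num_eval_samples))  # 234 evaluation samples
print(round(batch_size))        # 8 (per_device_eval_batch_size 2 x 4 GPUs)
```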
## Intended uses & limitations

This model has not been trained to avoid specific instructions.

## Training procedure

Finetuning was done on the model HuggingFaceH4/zephyr-7b-beta with the following prompt:

```
---------------------
Portuguese:
<|system|>
Você é um médico tratando um paciente com amnésia. Para responder as perguntas do paciente, você irá ler um texto anteriormente para se contextualizar. Se você trouxer informações desconhecidas, fora do texto lido, poderá deixar o paciente confuso. Se o paciente fizer uma questão sobre informações não presentes no texto, você precisa responder de forma educada que você não tem informação suficiente para responder, pois se tentar responder, pode trazer informações que não ajudarão o paciente recuperar sua memória. Lembre, se não estiver no texto, você precisa responder de forma educada que você não tem informação suficiente para responder. Precisamos ajudar o paciente.

Contexto: {context}</s>
<|user|>
{question}</s>
<|assistant|>
{chosen_response}</s>
---------------------
```
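The placeholders {context}, {question}, and {chosen_response} are filled per example before training. A minimal sketch of that substitution (the exact whitespace handling in the training script is an assumption, and the system message is abbreviated here):

```python
# Sketch: filling the prompt template above for one training example.
# The real system message is the full Portuguese instruction shown above.
TEMPLATE = (
    "<|system|>\n"
    "{system}\n"
    "\n"
    "Contexto: {context}</s>\n"
    "<|user|>\n"
    "{question}</s>\n"
    "<|assistant|>\n"
    "{chosen_response}</s>"
)

example = TEMPLATE.format(
    system="Você é um médico tratando um paciente com amnésia...",
    context="O horário de atendimento é das 9h às 18h.",
    question="Qual é o horário de atendimento?",
    chosen_response="O atendimento funciona das 9h às 18h.",
)
print(example.splitlines()[0])  # <|system|>
```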
### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- per_device_train_batch_size: 2
- per_device_eval_batch_size: 2
- gradient_accumulation_steps: 8
- num_gpus: 4
- total_train_batch_size: 64
- optimizer: AdamW
- lr_scheduler_type: cosine
- num_steps: 96
- quantization_type: bitsandbytes
- LoRA:
  - bits: 4
  - use_exllama: True
  - device_map: auto
  - use_cache: False
  - lora_r: 16
  - lora_alpha: 32
  - lora_dropout: 0.05
  - bias: none
  - target_modules: ['q_proj', 'k_proj', 'v_proj', 'o_proj']
  - task_type: CAUSAL_LM
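The effective batch size listed above is the product of the per-device batch size, the gradient-accumulation steps, and the GPU count; a quick check:

```python
# total_train_batch_size follows from the other hyperparameters
per_device_train_batch_size = 2
gradient_accumulation_steps = 8
num_gpus = 4

total_train_batch_size = (
    per_device_train_batch_size * gradient_accumulation_steps * num_gpus
)
print(total_train_batch_size)  # 64, matching the value reported above
```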
### Training results

### Framework versions

- transformers==4.39.1
- datasets==2.18.0
- peft==0.10.0
- safetensors==0.4.2
- evaluate==0.4.1
- bitsandbytes==0.43
- huggingface_hub==0.20.3
- seqeval==1.2.2
- optimum==1.17.1
- auto-gptq==0.7.1
- gpustat==1.1.1
- deepspeed==0.14.0
- wandb==0.16.3
- trl==0.8.1
- accelerate==0.28.0
- coloredlogs==15.0.1
- traitlets==5.14.1
- autoawq@https://github.com/casper-hansen/AutoAWQ/releases/download/v0.2.0/autoawq-0.2.0+cu118-cp310-cp310-linux_x86_64.whl

### Hardware

- Cloud provider: runpod.io
added_tokens.json
ADDED
@@ -0,0 +1,4 @@
{
  "<|im_end|>": 32001,
  "<|im_start|>": 32000
}
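These two ChatML tokens extend the base vocabulary, so their ids sit immediately after the original entries. A quick sanity check (the base vocab size of 32,000 is stated here as an assumption about the zephyr-7b-beta SentencePiece tokenizer):

```python
import json

# Contents of added_tokens.json
added = json.loads('{"<|im_end|>": 32001, "<|im_start|>": 32000}')

# Added-token ids must sit immediately after the base vocabulary
base_vocab_size = 32000  # assumed size of the underlying vocab
assert sorted(added.values()) == [base_vocab_size, base_vocab_size + 1]

new_vocab_size = base_vocab_size + len(added)
print(new_vocab_size)  # 32002
```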
special_tokens_map.json
ADDED
@@ -0,0 +1,28 @@
{
  "additional_special_tokens": [
    {
      "content": "<|im_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false
    },
    {
      "content": "<|im_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false
    }
  ],
  "bos_token": "<|im_start|>",
  "eos_token": "<|im_end|>",
  "pad_token": "<|im_end|>",
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
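Note that `pad_token` reuses the `eos_token` string, a common choice when the base tokenizer ships no dedicated pad token. A quick consistency check of this map against added_tokens.json (a sketch with the two files inlined):

```python
# Cross-check special_tokens_map.json against added_tokens.json
special = {
    "bos_token": "<|im_start|>",
    "eos_token": "<|im_end|>",
    "pad_token": "<|im_end|>",
}
added = {"<|im_end|>": 32001, "<|im_start|>": 32000}

# Every named special token must exist in the added-token vocabulary
for name, token in special.items():
    assert token in added, f"{name} missing from added_tokens.json"
print(added[special["pad_token"]])  # 32001
```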
tokenizer.json
ADDED
The diff for this file is too large to render.
tokenizer.model
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055
size 493443
tokenizer_config.json
ADDED
@@ -0,0 +1,65 @@
{
  "add_bos_token": true,
  "add_eos_token": false,
  "added_tokens_decoder": {
    "0": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "32000": {
      "content": "<|im_start|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "32001": {
      "content": "<|im_end|>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "additional_special_tokens": [
    "<|im_start|>",
    "<|im_end|>"
  ],
  "bos_token": "<|im_start|>",
  "chat_template": "{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
  "clean_up_tokenization_spaces": false,
  "eos_token": "<|im_end|>",
  "legacy": true,
  "max_lenght": 8192,
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": "<|im_end|>",
  "padding": true,
  "sp_model_kwargs": {},
  "spaces_between_special_tokens": false,
  "tokenizer_class": "LlamaTokenizer",
  "truncation_side": "left",
  "unk_token": "<unk>",
  "use_default_system_prompt": true
}
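The `chat_template` entry above is a small Jinja loop. As a sanity check, its logic can be reproduced in plain Python (a sketch of what the template expands to, not the `tokenizer.apply_chat_template` API itself):

```python
# Sketch: what the Jinja chat_template in tokenizer_config.json expands to.
def apply_chat_template(messages, add_generation_prompt=False):
    text = ""
    for message in messages:
        # '<|im_start|>' + role + '\n' + content + '<|im_end|>' + '\n'
        text += (
            "<|im_start|>" + message["role"] + "\n"
            + message["content"] + "<|im_end|>" + "\n"
        )
    if add_generation_prompt:
        text += "<|im_start|>assistant\n"
    return text

prompt = apply_chat_template(
    [{"role": "user", "content": "Olá"}],
    add_generation_prompt=True,
)
print(prompt)
# <|im_start|>user
# Olá<|im_end|>
# <|im_start|>assistant
```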