ironrock committed
Commit 27e3ded · verified · 1 Parent(s): bdccd41

Upload folder using huggingface_hub
README.md ADDED

---
license: mit
library_name: "trl"
tags:
- KTO
- WeniGPT
base_model: HuggingFaceH4/zephyr-7b-beta
model-index:
- name: Weni/WeniGPT-QA-Zephyr-7B-4.0.2-KTO
  results: []
language: ['pt']
---

# Weni/WeniGPT-QA-Zephyr-7B-4.0.2-KTO

This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on the dataset Weni/WeniGPT-QA-Binarized-1.2.0 with the KTO trainer. It is part of the WeniGPT project for [Weni](https://weni.ai/).

It achieves the following results on the evaluation set:
{'eval_loss': 0.02262546680867672, 'eval_runtime': 33.5353, 'eval_samples_per_second': 13.955, 'eval_steps_per_second': 0.895, 'epoch': 2.91}

## Intended uses & limitations

This model has not been trained to avoid specific instructions.

## Training procedure

Finetuning was done on the model HuggingFaceH4/zephyr-7b-beta with the following prompt:

```
---------------------
Question:
<|system|>
Você é um médico tratando um paciente com amnésia. Para responder as perguntas do paciente, você irá ler um texto anteriormente para se contextualizar. Se você trouxer informações desconhecidas, fora do texto lido, poderá deixar o paciente confuso. Se o paciente fizer uma questão sobre informações não presentes no texto, você precisa responder de forma educada que você não tem informação suficiente para responder, pois se tentar responder, pode trazer informações que não ajudarão o paciente recuperar sua memória. Lembre, se não estiver no texto, você precisa responder de forma educada que você não tem informação suficiente para responder. Precisamos ajudar o paciente.
<|user|>
Contexto: {context}

Questão: {question}</s>
<|assistant|>


---------------------
Response:
{response}</s>


---------------------

```
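
As an illustration (not part of the original card), here is a minimal Python sketch of how one might fill this template and query the model; the generation settings and `device_map="auto"` (which requires accelerate) are assumptions:

```python
# Minimal usage sketch, assuming the standard transformers API;
# the generation settings below are illustrative, not from the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Weni/WeniGPT-QA-Zephyr-7B-4.0.2-KTO"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Template mirrors the training prompt shown above.
template = (
    "<|system|>\n"
    "{system}\n"
    "<|user|>\n"
    "Contexto: {context}\n\n"
    "Questão: {question}</s>\n"
    "<|assistant|>\n"
)
prompt = template.format(
    system="(the system instructions shown above)",
    context="(retrieved reference text)",
    question="(the user's question)",
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```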

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- per_device_train_batch_size: 4
- per_device_eval_batch_size: 2
- gradient_accumulation_steps: 8
- num_gpus: 8
- total_train_batch_size: 256
- optimizer: AdamW
- lr_scheduler_type: cosine
- num_steps: 48
- quantization_type: bitsandbytes
- LoRA:
  - bits: 4
  - use_exllama: True
  - device_map: auto
  - use_cache: False
  - lora_r: 16
  - lora_alpha: 32
  - lora_dropout: 0.05
  - bias: none
  - target_modules: ['q_proj', 'k_proj', 'v_proj', 'o_proj']
  - task_type: CAUSAL_LM
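
For concreteness, here is a minimal sketch of how these values could map onto a KTO run with trl==0.8.1. The dataset split/column handling, the bf16 compute dtype, and the exact trainer wiring are assumptions, not taken from the card:

```python
# Hypothetical KTO training sketch; mirrors the hyperparameters listed
# above but is NOT the authors' training script.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import KTOConfig, KTOTrainer

base = "HuggingFaceH4/zephyr-7b-beta"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.unk_token  # matches the uploaded tokenizer config

# 4-bit quantization ("quantization_type: bitsandbytes", "bits: 4").
model = AutoModelForCausalLM.from_pretrained(
    base,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,  # compute dtype is an assumption
    ),
    device_map="auto",
)
model.config.use_cache = False  # "use_cache: False"

peft_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

args = KTOConfig(
    output_dir="wenigpt-qa-kto",
    learning_rate=2e-4,
    per_device_train_batch_size=4,   # 4 x 8 accumulation x 8 GPUs = 256 effective
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,
    lr_scheduler_type="cosine",
    max_steps=48,
)

# KTOTrainer expects 'prompt', 'completion' and boolean 'label' columns;
# that this dataset already provides them is an assumption.
dataset = load_dataset("Weni/WeniGPT-QA-Binarized-1.2.0")

trainer = KTOTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```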

### Training results

### Framework versions

- transformers==4.39.1
- datasets==2.18.0
- peft==0.10.0
- safetensors==0.4.2
- evaluate==0.4.1
- bitsandbytes==0.43
- huggingface_hub==0.20.3
- seqeval==1.2.2
- optimum==1.17.1
- auto-gptq==0.7.1
- gpustat==1.1.1
- deepspeed==0.14.0
- wandb==0.16.3
- trl==0.8.1
- accelerate==0.28.0
- coloredlogs==15.0.1
- traitlets==5.14.1
- autoawq@https://github.com/casper-hansen/AutoAWQ/releases/download/v0.2.0/autoawq-0.2.0+cu118-cp310-cp310-linux_x86_64.whl

### Hardware

- Cloud provider: runpod.io
special_tokens_map.json ADDED

{
  "additional_special_tokens": [
    "<unk>",
    "<s>",
    "</s>"
  ],
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": "<unk>",
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
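
A quick way to sanity-check these settings after download (illustrative, not part of the upload):

```python
# Confirm the special tokens declared in special_tokens_map.json.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Weni/WeniGPT-QA-Zephyr-7B-4.0.2-KTO")
print(tok.bos_token, tok.eos_token, tok.unk_token, tok.pad_token)
# Expected per the file above: <s> </s> <unk> <unk>  (padding reuses <unk>)
```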
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055
size 493443
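
Since only the Git LFS pointer is stored in the diff, a downloaded tokenizer.model can be checked against it (illustrative sketch):

```python
# Verify a local tokenizer.model against the LFS pointer above:
# the blob must hash to the recorded oid and match the recorded size.
import hashlib
import os

path = "tokenizer.model"
with open(path, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print(digest == "dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055")
print(os.path.getsize(path) == 493443)
```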
tokenizer_config.json ADDED

{
  "add_bos_token": true,
  "add_eos_token": false,
  "added_tokens_decoder": {
    "0": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "additional_special_tokens": [
    "<unk>",
    "<s>",
    "</s>"
  ],
  "bos_token": "<s>",
  "chat_template": "{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|user|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'system' %}\n{{ '<|system|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'assistant' %}\n{{ '<|assistant|>\n' + message['content'] + eos_token }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{ '<|assistant|>' }}\n{% endif %}\n{% endfor %}",
  "clean_up_tokenization_spaces": false,
  "eos_token": "</s>",
  "legacy": true,
  "max_lenght": 8192,
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": "<unk>",
  "padding": true,
  "sp_model_kwargs": {},
  "spaces_between_special_tokens": false,
  "tokenizer_class": "LlamaTokenizer",
  "truncation_side": "left",
  "unk_token": "<unk>",
  "use_default_system_prompt": true
}
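
The chat_template above encodes the Zephyr-style turn format used in the training prompt. A sketch of rendering a conversation with it, assuming standard transformers usage:

```python
# Render a conversation using the chat_template from tokenizer_config.json.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Weni/WeniGPT-QA-Zephyr-7B-4.0.2-KTO")
messages = [
    {"role": "system", "content": "Você é um médico tratando um paciente com amnésia."},
    {"role": "user", "content": "Contexto: ...\n\nQuestão: ..."},
]
text = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(text)
# Each turn is wrapped as <|role|>\n...</s>, ending with a <|assistant|> cue.
```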