Sandiago21 committed
Commit • c11c037
Parent(s): 8071e2b
update model card README.md
README.md CHANGED
@@ -1,173 +1,30 @@
---
license: other
language:
- en
library_name: transformers
pipeline_tag: conversational
tags:
- prompt
---
## Model Details

### Model Description

The decapoda-research/llama-7b-hf model was finetuned on conversations and question-answering prompts.

**Developed by:** [More Information Needed]

**Shared by:** [More Information Needed]

**Model type:** Causal LM

**Language(s) (NLP):** English, multilingual

**License:** Research

**Finetuned from model:** decapoda-research/llama-7b-hf

## Model Sources

**Repository:** [More Information Needed]

**Paper:** [More Information Needed]

**Demo:** [More Information Needed]
## Uses

The model can be used for prompt answering.

### Direct Use

The model can be used for prompt answering.

### Downstream Use

Generating text and answering prompts.

## Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
# Usage

## Creating prompt

The model was trained on the following kind of prompt:

```python
def generate_prompt(instruction: str, input_ctxt: str = None) -> str:
    if input_ctxt:
        return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Input:
{input_ctxt}

### Response:"""
    else:
        return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:"""
```
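For illustration, here is a minimal sketch of how `generate_prompt` builds the prompt string used later in this card (the instruction and input below are taken from the Example of Usage section):

```python
# Build a prompt in the same format the model was finetuned on.
prompt = generate_prompt(
    instruction="Which is the capital city of Greece and with which countries does Greece border?",
    input_ctxt="Question answering",
)
print(prompt)  # ends with "### Response:", after which the model is expected to answer
```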
## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import LlamaTokenizer, LlamaForCausalLM
from peft import PeftModel

MODEL_NAME = "decapoda-research/llama-7b-hf"

# Tokenizer of the base model; pad with token id 0.
tokenizer = LlamaTokenizer.from_pretrained(MODEL_NAME, add_eos_token=True)
tokenizer.pad_token_id = 0

# Load the base model in 8-bit (requires bitsandbytes) and attach the finetuned LoRA adapters.
model = LlamaForCausalLM.from_pretrained(MODEL_NAME, load_in_8bit=True, device_map="auto")
model = PeftModel.from_pretrained(model, "Sandiago21/llama-7b-hf")
```
### Example of Usage

```python
import torch
from transformers import GenerationConfig

PROMPT = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nWhich is the capital city of Greece and with which countries does Greece border?\n\n### Input:\nQuestion answering\n\n### Response:\n"""
DEVICE = "cuda"

inputs = tokenizer(
    PROMPT,
    return_tensors="pt",
)

input_ids = inputs["input_ids"].to(DEVICE)

# Conservative sampling settings for question answering.
generation_config = GenerationConfig(
    temperature=0.1,
    top_p=0.95,
    repetition_penalty=1.2,
)

print("Generating Response ... ")
with torch.no_grad():
    generation_output = model.generate(
        input_ids=input_ids,
        generation_config=generation_config,
        return_dict_in_generate=True,
        output_scores=True,
        max_new_tokens=256,
    )

for s in generation_output.sequences:
    print(tokenizer.decode(s))
```
### Example Output

```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
Which is the capital city of Greece and with which countries does Greece border?

### Input:
Question answering

### Response:

Generating...
<unk> Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
Which is the capital city of Greece and with which countries does Greece border?

### Input:
Question answering

### Response:
<unk>capital city of Athens and it borders Albania to the northwest, North Macedonia and Bulgaria to the northeast, Turkey to the east, and Libya to the southeast across the Mediterranean Sea.
```
## Training procedure
@@ -192,19 +49,3 @@ The following hyperparameters were used during training:
- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.12.1

### Training Data

The decapoda-research/llama-7b-hf model was finetuned on conversations and question-answering data.

### Training Procedure

The decapoda-research/llama-7b-hf model was further trained and finetuned on question-answering and prompt data for 1 epoch (approximately 10 hours of training on a single GPU).
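For reference, the sketch below shows what such a LoRA adapter finetuning run could look like with `peft` and the Transformers `Trainer`. It is an illustration only: the LoRA settings, batch size, learning rate, output directory, and one-example placeholder dataset are assumptions made for the sketch, not the actual data or hyperparameters used to train this model (`generate_prompt` is the prompt-building function defined earlier in this card).

```python
import torch
from datasets import Dataset
from transformers import (
    DataCollatorForLanguageModeling,
    LlamaForCausalLM,
    LlamaTokenizer,
    Trainer,
    TrainingArguments,
)
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training

MODEL_NAME = "decapoda-research/llama-7b-hf"

tokenizer = LlamaTokenizer.from_pretrained(MODEL_NAME, add_eos_token=True)
tokenizer.pad_token_id = 0

# Base model in 8-bit, prepared for adapter training on top of the frozen quantized weights
# (prepare_model_for_int8_training was renamed prepare_model_for_kbit_training in newer peft releases).
model = LlamaForCausalLM.from_pretrained(MODEL_NAME, load_in_8bit=True, device_map="auto")
model = prepare_model_for_int8_training(model)

# Trainable LoRA adapters (illustrative settings, not the values used for this model).
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Placeholder dataset: prompts built with `generate_prompt` followed by the desired answer.
texts = [generate_prompt("Which is the capital city of Greece?", "Question answering") + " Athens."]
train_dataset = Dataset.from_dict({"text": texts}).map(
    lambda example: tokenizer(example["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    train_dataset=train_dataset,
    args=TrainingArguments(
        output_dir="llama-7b-hf-prompt-answering",
        num_train_epochs=1,  # the card reports a single epoch of training
        per_device_train_batch_size=4,
        learning_rate=3e-4,
        fp16=True,
        logging_steps=10,
    ),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```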
## Model Architecture and Objective

The model is based on the decapoda-research/llama-7b-hf model, with adapters finetuned on top of the base model on conversations and question-answering data.
This model is a fine-tuned version of [chainyo/alpaca-lora-7b](https://huggingface.co/chainyo/alpaca-lora-7b) on an unknown dataset.
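Because the finetuned weights live in LoRA adapters on top of the frozen base model, the adapters can optionally be folded back into the base weights for adapter-free inference. A minimal sketch (the output directory name is an assumption; merging requires loading the base model unquantized, e.g. in fp16, rather than in 8-bit):

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

BASE = "decapoda-research/llama-7b-hf"

# Load the base model unquantized, attach the adapters, then merge them into the base weights.
base_model = LlamaForCausalLM.from_pretrained(BASE, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base_model, "Sandiago21/llama-7b-hf")
merged = model.merge_and_unload()  # returns a plain LlamaForCausalLM with merged weights

merged.save_pretrained("llama-7b-hf-prompt-answering-merged")
LlamaTokenizer.from_pretrained(BASE).save_pretrained("llama-7b-hf-prompt-answering-merged")
```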
---
license: other
tags:
- generated_from_trainer
model-index:
- name: llama-7b-hf-prompt-answering
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# llama-7b-hf-prompt-answering

This model is a fine-tuned version of [chainyo/alpaca-lora-7b](https://huggingface.co/chainyo/alpaca-lora-7b) on an unknown dataset.
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.12.1