wangrongsheng committed
Commit: 334bc7d
1 Parent(s): ac7bdc3

Create README.md
README.md ADDED
@@ -0,0 +1,36 @@
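Usage with `transformers`: the snippet below loads one of the DPDG-Llama-8B checkpoints, prompts it to generate a "chosen"/"rejected" preference pair for a sample question, and decodes the generation.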
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Two checkpoints are available; pick one:
# [wangrongsheng/DPDG-Llama-8B-qlora, wangrongsheng/DPDG-Llama-8B-lora]
model_id = "wangrongsheng/DPDG-Llama-8B-lora"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# The prompt instructs the model to produce a preference pair: a high-quality
# "chosen" response and a lower-quality "rejected" response.
prompt = "\nPlease generate two different quality responses based on the given prompt. The first response should be a high-quality, satisfactory answer, representing the \"chosen\" option. The second response should be a low-quality, less ideal answer, representing the \"rejected\" option.\n\nWhen generating these two responses, please note the following:\n\n1. The \"chosen\" response should have substantive content, fluent expression, and be able to fully answer the question or requirement posed in the prompt.\n\n2. The \"rejected\" response can have some issues, such as illogical flow, incomplete information, or unclear expression. But ensure that it is still a response that can be loosely understood, not completely irrelevant or meaningless content.\n\n3. The lengths of the two responses should be roughly comparable, not vastly different.\n\n4. Try to reflect a clear difference in quality between the \"chosen\" response and the \"rejected\" response, so that the distinction is evident.\n\nPlease generate a \"chosen\" response and a \"rejected\" response for the given prompt according to these guidelines. This will help train the reward model to distinguish high-quality and low-quality responses.\n\nThe prompt is:\nIf an electric train is traveling south, which way is the smoke going?"

messages = [
    {"role": "user", "content": prompt},
]

# Apply the model's chat template and move the inputs to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=8192,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# Decode only the newly generated tokens, skipping the prompt.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))

# Output:
# [chosen]
# The smoke is going up. Trains do not produce smoke, they produce steam or exhaust gas, which rises upwards. Since the train is traveling south, the smoke is going upwards.
# [rejected]
# It is going south.
```
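The prompt steers the model to tag its two answers as `[chosen]` and `[rejected]` so they can be used to train a reward model. A minimal sketch for turning a generation into a structured pair follows, assuming the tags appear exactly as in the sample output above; `parse_preference_pair` is a hypothetical helper, not part of the model card.

```python
# Minimal sketch (assumption: the generation contains the literal tags
# "[chosen]" and "[rejected]", as in the sample output above).
def parse_preference_pair(text: str) -> dict:
    # Everything before "[rejected]" belongs to the chosen answer.
    chosen_part, _, rejected_part = text.partition("[rejected]")
    return {
        "chosen": chosen_part.replace("[chosen]", "").strip(),
        "rejected": rejected_part.strip(),
    }

sample = (
    "[chosen]\nThe smoke is going up. Trains do not produce smoke, "
    "they produce steam or exhaust gas, which rises upwards.\n"
    "[rejected]\nIt is going south."
)
print(parse_preference_pair(sample))
# {'chosen': 'The smoke is going up. ...', 'rejected': 'It is going south.'}
```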