```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "wangrongsheng/DPDG-Qwen2-7B-lora"
device = "cuda"  # device to move the tokenized inputs onto

# Load the model and tokenizer.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# System prompt (in Chinese): it instructs the model to produce two answers of
# similar length for the given prompt -- a high-quality, satisfying "chosen"
# answer and a lower-quality but still intelligible "rejected" answer -- with a
# clearly visible quality gap, for training a reward model.
ins = """
请根据给定的提示生成两种不同质量的回答。第一种回答应该是高质量的、令人满意的答案,代表"chosen"的选项。第二种回答则应该是低质量的、不太理想的答案,代表"rejected"的选项。\n
在生成这两个回答时,请注意以下事项:\n
1. "chosen" 回复应具有实质性内容、流畅的表达,并能够完整回答提示中提出的问题或要求。\n
2. "rejected" 回复可能存在一些问题,例如逻辑不连贯、信息不完整或表达不清晰。但请确保它仍然是一个可以大致理解的回复,而不是完全无关或毫无意义的内容。\n
3. 这两个回复的长度应该大致相当,而不是差异极大。\n
4. 请确保在"chosen"回复和"rejected"回复之间反映出明显的质量差异,使区别显而易见。\n
请根据这些指导方针为给定的提示生成一个"chosen"的回应和一个"rejected"的回应。这将有助于训练奖励模型以区分高质量和低质量的回应。\n
提示是:
"""
# Example user prompt (in Chinese): "What is the leaf-spine topology in an ACI fabric?"
prompt = "什么是ACI fabric中的叶脊拓扑结构?"

# Build the chat-formatted input.
messages = [
    {"role": "system", "content": ins},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

# Generate, then strip the prompt tokens from the returned sequences.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

print(response)
```
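
The snippet above produces one raw chosen/rejected generation for a single prompt. A natural extension is to loop over a list of prompts and store each prompt together with the model's raw output for later post-processing into a preference dataset. The sketch below reuses `model`, `tokenizer`, `ins`, and `device` from the snippet above; the helper name `generate_pair`, the example prompt list, the JSONL layout, and the output path `dpdg_pairs.jsonl` are illustrative choices, not part of this model card.

```python
import json

def generate_pair(prompt: str, max_new_tokens: int = 1024) -> str:
    """Generate one raw chosen/rejected completion for a single prompt,
    reusing the model, tokenizer, ins and device defined above."""
    messages = [
        {"role": "system", "content": ins},
        {"role": "user", "content": prompt},
    ]
    text = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    model_inputs = tokenizer([text], return_tensors="pt").to(device)
    generated_ids = model.generate(**model_inputs, max_new_tokens=max_new_tokens)
    # Drop the prompt tokens, keep only the newly generated continuation.
    generated_ids = [
        out[len(inp):] for inp, out in zip(model_inputs.input_ids, generated_ids)
    ]
    return tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

# Illustrative prompt list and output path -- replace with your own data.
prompts = ["什么是ACI fabric中的叶脊拓扑结构?"]
with open("dpdg_pairs.jsonl", "w", encoding="utf-8") as f:
    for p in prompts:
        record = {"prompt": p, "generation": generate_pair(p)}
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```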