andreaskoepf committed
Commit fd1ead2 · Parent(s): f0b152f
update prompt info

README.md CHANGED
@@ -11,7 +11,7 @@ tags:
 - sft
 pipeline_tag: text-generation
 widget:
-- text: <|system|>You are an AI assistant.
+- text: <|system|>You are an AI assistant. You will be given a task. You must generate a detailed and long answer.</s><|prompter|>What is a meme, and what's the history behind this word?</s><|assistant|>
 - text: <|prompter|>What's the Earth total population</s><|assistant|>
 - text: <|prompter|>Write a story about future of AI development</s><|assistant|>
 ---
@@ -33,7 +33,7 @@ from transformers import AutoModelForCausalLM, AutoTokenizer
 tokenizer = AutoTokenizer.from_pretrained("OpenAssistant/llama2-13b-orca-8k-3319", use_fast=False)
 model = AutoModelForCausalLM.from_pretrained("OpenAssistant/llama2-13b-orca-8k-3319", torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto")
 
-system_message = "You are
+system_message = "You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information."
 user_prompt = "Write me a poem please"
 prompt = f"""<|system|>{system_message}</s><|prompter|>{user_prompt}</s><|assistant|>"""
 inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
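For context, a minimal end-to-end sketch of how the snippet in this hunk is typically completed. The shortened system message and the `generate` sampling settings below are illustrative assumptions, not values from this commit:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "OpenAssistant/llama2-13b-orca-8k-3319"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto"
)

# Prompt layout from the model card: system and user turns are closed with </s>,
# and the string ends with <|assistant|> so the model continues as the assistant.
system_message = "You are a helpful assistant."  # assumption: shortened for readability
user_prompt = "Write me a poem please"
prompt = f"<|system|>{system_message}</s><|prompter|>{user_prompt}</s><|assistant|>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Sampling settings here are assumptions for illustration, not from the card.
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the echoed prompt.
new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```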
@@ -59,7 +59,7 @@ HF transformers >=4.31.0 is installed (`pip install transformers>=4.31.0`).
 
 ## Conversation Template
 
-For the initial response use (the system
+For the initial response use (e.g. the [llama2 default system prompt](https://github.com/facebookresearch/llama/blob/6c7fe276574e78057f917549435a2554000a876d/llama/generation.py#L46) works well):
 
 ```
 <|system|>system message</s><|prompter|>user prompt</s><|assistant|>
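As a sketch of how this template can be assembled programmatically: the helper below reproduces the initial-response layout shown in the hunk. The way it appends earlier assistant replies (each terminated by `</s>`) for follow-up turns is an assumption extrapolated from the token layout, since only the first turn is documented here.

```python
def build_prompt(system_message, turns):
    """Build a prompt in the OpenAssistant token format.

    `turns` is a list of (user, assistant) pairs; pass None as the assistant
    reply for the last turn to leave the prompt open for the model to complete.
    Note: the card only shows the initial response; appending previous
    assistant replies terminated by </s> is an assumption.
    """
    prompt = f"<|system|>{system_message}</s>"
    for user, assistant in turns:
        prompt += f"<|prompter|>{user}</s><|assistant|>"
        if assistant is not None:
            prompt += f"{assistant}</s>"
    return prompt

# Initial response, matching the template shown above:
print(build_prompt("system message", [("user prompt", None)]))
# -> <|system|>system message</s><|prompter|>user prompt</s><|assistant|>
```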