ngxson and rasmus1610 committed
Commit 7eb5a40
1 Parent(s): 84e8f3e

Update README.md (#6)


- Update README.md (66d50a29cc798d8f30a240656c3a483c4d42f68e)


Co-authored-by: Marius <[email protected]>

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -100,7 +100,7 @@ Below are some system and instruct prompts that work well for special tasks
 ```python
 system_prompt_rewrite = "You are an AI writing assistant. Your task is to rewrite the user's email to make it more professional and approachable while maintaining its main points and key message. Do not return any text other than the rewritten message."
 user_prompt_rewrite = "Rewrite the message below to make it more friendly and approachable while maintaining its main points and key message. Do not add any new information or return any text other than the rewritten message\nThe message:"
-messages = [{"role": "system", "content": system_prompt_rewrite}, {"role": "user", "content":f"{user_prompt_rewrite} The CI is failing after your last commit!}"]
+messages = [{"role": "system", "content": system_prompt_rewrite}, {"role": "user", "content":f"{user_prompt_rewrite} The CI is failing after your last commit!"}]
 input_text=tokenizer.apply_chat_template(messages, tokenize=False)
 inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
 outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
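The change fixes a syntax error in the removed line: the closing brace sat inside the f-string literal (a lone `}` is invalid in an f-string) and the user-message dict was never closed. The added line moves the brace outside the quote so it closes the dict. Below is a minimal sketch of the corrected snippet run end to end; the checkpoint name and device handling are illustrative assumptions, not part of this commit or the README hunk shown above.

```python
# Sketch only: checkpoint name is a hypothetical placeholder, not from the README.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "your-org/your-chat-model"  # assumption: any chat-tuned causal LM
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

system_prompt_rewrite = "You are an AI writing assistant. Your task is to rewrite the user's email to make it more professional and approachable while maintaining its main points and key message. Do not return any text other than the rewritten message."
user_prompt_rewrite = "Rewrite the message below to make it more friendly and approachable while maintaining its main points and key message. Do not add any new information or return any text other than the rewritten message\nThe message:"

# Corrected line: the brace now closes the user-message dict instead of
# appearing inside the f-string.
messages = [
    {"role": "system", "content": system_prompt_rewrite},
    {"role": "user", "content": f"{user_prompt_rewrite} The CI is failing after your last commit!"},
]

# Render the chat template to a prompt string, tokenize it, and generate.
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```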