Update README.md
README.md
## Instruction Format

To fully leverage the capabilities of Ladybird-base-7B-v8, especially its instruction fine-tuning, users are advised to follow the [ChatML](https://huggingface.co/docs/transformers/main/en/chat_templating) format. This format ensures that prompts are processed effectively, resulting in accurate and context-aware responses from the model. Here's how to construct your prompts:

```python
# `pipe` refers to the transformers text-generation pipeline loaded with this model.
msg = [
    {"role": "system", "content": "You are a friendly chatbot who always responds in the style of a pirate"},
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]

prompt = pipe.tokenizer.apply_chat_template(msg, tokenize=False, add_generation_prompt=True)
```
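The formatted `prompt` can then be passed back to the pipeline for generation. A minimal sketch is shown below; it assumes `pipe` is a `transformers` text-generation pipeline already loaded with Ladybird-base-7B-v8, and the sampling parameters are illustrative rather than values prescribed by the model.

```python
# Minimal sketch: generate a completion from the ChatML-formatted prompt.
# Assumes `pipe` is a transformers text-generation pipeline for this model;
# the sampling values below are illustrative only.
outputs = pipe(
    prompt,
    max_new_tokens=256,   # cap the length of the generated reply
    do_sample=True,       # sample instead of greedy decoding
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)
print(outputs[0]["generated_text"])
```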