add chat_template in tokenizer_config.json
README.md CHANGED

@@ -66,7 +66,7 @@ Compare this model with other models on different public safety testsets using
 
 Since we have added the chat_template in `tokenizer_config.json`, you can directly use our model without writing a complicated chat template yourself.
 
-Here is the [vLLM](https://docs.vllm.ai/en/latest/getting_started/installation/gpu/index.html) usage
+Here is the [vLLM](https://docs.vllm.ai/en/latest/getting_started/installation/gpu/index.html) usage example:
 
 ```python
 from transformers import AutoTokenizer
@@ -88,7 +88,7 @@ output = llm.generate(prompt, sampling_params=SamplingParams(max_tokens=256))
 print(output[0].outputs[0].text.strip())
 ```
 
-Here is the [Transformers](https://github.com/huggingface/transformers) usage
+Here is the [Transformers](https://github.com/huggingface/transformers) usage example:
 
 ```python
 from transformers import AutoTokenizer, AutoModelForCausalLM
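
The `chat_template` added here is the Jinja template that Transformers loads from the `chat_template` key of `tokenizer_config.json`. A quick sketch of how to confirm the template is bundled; the model ID below is a placeholder, not the actual repo:

```python
from transformers import AutoTokenizer

# Placeholder repo ID; substitute the real model name.
tokenizer = AutoTokenizer.from_pretrained("your-org/your-model")

# The Jinja chat template loaded from the "chat_template" key
# in tokenizer_config.json.
print(tokenizer.chat_template)
```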
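
The diff shows only fragments of the vLLM snippet (the tokenizer import, the `generate` call, and the final print). A minimal sketch of the full flow those fragments imply, assuming a placeholder model ID and an invented sample message; only the fragments visible in the diff come from the README itself:

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model_id = "your-org/your-model"  # placeholder, not the actual repo ID

# The bundled chat template renders the conversation into a prompt string.
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [{"role": "user", "content": "Hello, who are you?"}]  # sample message
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

llm = LLM(model=model_id)
output = llm.generate(prompt, sampling_params=SamplingParams(max_tokens=256))
print(output[0].outputs[0].text.strip())
```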
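
Likewise, only the import line of the Transformers snippet appears in the diff. A minimal sketch of the usual pattern, under the same placeholder assumptions:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "your-org/your-model"  # placeholder, not the actual repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Hello, who are you?"}]  # sample message
# The bundled chat template tokenizes the conversation directly.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True).strip())
```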