We stopped the training before the end and kept the *checkpoint-100* for the second step of the training.

This step consisted of instruction fine-tuning the previous checkpoint. For that purpose, we used a modified version of [openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco).

The template for the instruction fine-tuning was `Question: {question}\n\nAnswer: {answer}`. We used exactly the same parameters as during the pretraining and kept the *checkpoint-50*.
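
For illustration, a raw question/answer pair can be rendered with this template as in the minimal sketch below; the `question` and `answer` field names are assumptions for the example, not necessarily the column names of the modified dataset.

```python
# Minimal sketch: render a raw pair with the fine-tuning template.
# The "question"/"answer" field names are illustrative assumptions.
TEMPLATE = "Question: {question}\n\nAnswer: {answer}"

def format_example(example: dict) -> str:
    return TEMPLATE.format(question=example["question"], answer=example["answer"])

print(format_example({"question": "What is gravity?", "answer": "A fundamental attractive force."}))
```
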
# Usage
Usage is straightforward and very similar to that of any other instruction fine-tuned model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
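
# What follows is a sketch of a typical generation call; the model id below
# is a placeholder, not the actual repository name of the checkpoint.
model_name = "path/to/checkpoint-50"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Build a prompt following the fine-tuning template.
prompt = "Question: What is gravity?\n\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```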