ArmelR committed
Commit
ee26113
1 Parent(s): 9c1178b

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -49,8 +49,8 @@ We stopped the training before the end and kept the *checkpoint-100* for the sec
  This step consisted into the instruction fine-tuning of the previous checkpoint. For that purpose, we used a modified version of [openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco).
  The template for the instruction fine-tuning was `Question: {question}\n\nAnswer: {answer}`. We used exactly the same parameters we used during the pretraining and we kept the *checkpoint-50*.
 
- ## Usage
- The usage is straightforward an very similar to any other instruction fine-tuned model
+ # Usage
+ The usage is straightforward and very similar to any other instruction fine-tuned model.
 
  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer
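
# (The diff excerpt is truncated at the import line above. The lines below are NOT part of
#  the README diff: they are only a minimal usage sketch, assuming a generic Transformers
#  causal-LM checkpoint and the `Question: ...\n\nAnswer: ...` template the README describes.
#  The checkpoint id is a placeholder, not taken from this commit.)

checkpoint = "<model-id>"  # placeholder: substitute the actual repository id

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Build the prompt with the same template used for the instruction fine-tuning step.
prompt = "Question: How do I reverse a list in Python?\n\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a completion and decode it, dropping special tokens.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))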