versae committed
Commit 1b215f5 · 1 parent: cb69a58

Update README.md

Files changed (1):
  1. README.md (+1 −1)
README.md CHANGED
@@ -21,7 +21,7 @@ from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig,
 
 base_model = "bertin-project/bertin-gpt-j-6B-alpaca"
 tokenizer = AutoTokenizer.from_pretrained(base_model)
-model = AutoModelForCausalLM.from_pretrained(base_model)
+model = AutoModelForCausalLM.from_pretrained(base_model).cuda()
 ```
 
 For generation, we can either use `pipeline()` or the model's `.generate()` method. Remember that the prompt needs a **Spanish** template:
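The change above moves the model to GPU with `.cuda()`, which the README's generation step relies on. A minimal sketch of generation via `.generate()`, assuming a Spanish Alpaca-style prompt template (the wording of the template below is hypothetical; the exact template is in the model card):

```python
# Sketch only: the Spanish template wording here is an assumption,
# not quoted from the bertin-project model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "bertin-project/bertin-gpt-j-6B-alpaca"


def build_prompt(instruction: str) -> str:
    # Alpaca-style instruction template in Spanish (assumed format).
    return (
        "A continuación hay una instrucción que describe una tarea. "
        "Escribe una respuesta que la complete adecuadamente.\n\n"
        f"### Instrucción:\n{instruction}\n\n### Respuesta:\n"
    )


prompt = build_prompt("Escribe un haiku sobre el mar.")

# The 6B model needs a GPU (hence the .cuda() in the diff);
# only load and generate when one is available.
if torch.cuda.is_available():
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = AutoModelForCausalLM.from_pretrained(base_model).cuda()
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The same prompt string can be passed to `pipeline("text-generation", model=base_model)` instead of calling `.generate()` directly.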