Update README.md
README.md
@@ -21,7 +21,7 @@ from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig,
 
 base_model = "bertin-project/bertin-gpt-j-6B-alpaca"
 tokenizer = AutoTokenizer.from_pretrained(base_model)
-model = AutoModelForCausalLM.from_pretrained(base_model)
+model = AutoModelForCausalLM.from_pretrained(base_model).cuda()
 ```
 
 For generation, we can either use `pipeline()` or the model's `.generate()` method. Remember that the prompt needs a **Spanish** template:
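The generation step the diff refers to can be sketched as follows. Note the exact Spanish template wording below is an assumption based on common Spanish Alpaca-style prompts, not quoted from this README; verify it against the model card before use.

```python
base_model = "bertin-project/bertin-gpt-j-6B-alpaca"

def build_prompt(instruction: str) -> str:
    # Assumed Spanish Alpaca-style template; check the model card
    # for the exact wording the model was fine-tuned with.
    return (
        "A continuación hay una instrucción que describe una tarea. "
        "Escribe una respuesta que complete adecuadamente lo que se pide.\n\n"
        f"### Instrucción:\n{instruction}\n\n"
        "### Respuesta:\n"
    )

def generate(instruction: str, max_new_tokens: int = 128) -> str:
    # Imports kept local so the prompt helper above is usable on its own.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(base_model)
    # .cuda() mirrors the change in the diff; requires a GPU.
    model = AutoModelForCausalLM.from_pretrained(base_model).cuda()
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to("cuda")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

The same prompt string can also be passed to a `pipeline("text-generation", ...)` object instead of calling `.generate()` directly.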