Update example with torch device
README.md
CHANGED

@@ -337,26 +337,17 @@ model-index:
 - create the pipeline object:
 
 ```python
-
+
+import torch
 from transformers import pipeline
 
 hf_name = 'pszemraj/led-base-book-summary'
 
-_model = AutoModelForSeq2SeqLM.from_pretrained(
-    hf_name,
-    low_cpu_mem_usage=True,
-)
-
-_tokenizer = AutoTokenizer.from_pretrained(
-    hf_name
-)
-
-
 summarizer = pipeline(
-
-
-
-
+    "summarization",
+    hf_name,
+    device=0 if torch.cuda.is_available() else -1,
+)
 ```
 
 - put words into the pipeline object:
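The substance of the change is that the pipeline is now built directly from the model name and placed on a GPU when one is available; in the `transformers` pipeline API, `device=0` selects the first CUDA device and `device=-1` keeps the model on CPU. A minimal sketch of how the updated snippet would be exercised end to end (the input text and the result handling below are illustrative, not part of the commit):

```python
import torch
from transformers import pipeline

hf_name = 'pszemraj/led-base-book-summary'

# Build the summarization pipeline, using the first GPU if CUDA is
# available, otherwise falling back to CPU (device=-1).
summarizer = pipeline(
    "summarization",
    hf_name,
    device=0 if torch.cuda.is_available() else -1,
)

# Placeholder input; any long passage of text works here.
wall_of_text = "The quick brown fox jumps over the lazy dog. " * 50

# The pipeline returns a list of dicts with a 'summary_text' key.
result = summarizer(wall_of_text, min_length=8, max_length=64)
print(result[0]["summary_text"])
```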