This repository contains fine-tuned TensorFlow and Safetensors weights of VBART for the title generation from news task.
- **Model type:** Transformer encoder-decoder based on the mBART architecture
- **Language(s) (NLP):** Turkish
- **License:** CC BY-NC-SA 4.0
- **Finetuned from:** VBART-XLarge
- **Paper:** [arXiv](https://arxiv.org/abs/2403.01308)

## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("vngrs-ai/VBART-XLarge-Title-Generation-from-News",
                                          model_input_names=['input_ids', 'attention_mask'])
# Uncomment the device_map kwarg and delete the closing parenthesis before it to run inference on GPU
model = AutoModelForSeq2SeqLM.from_pretrained("vngrs-ai/VBART-XLarge-Title-Generation-from-News")#, device_map="auto")

input_text = "..."
```
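The snippet above only loads the tokenizer and model. A minimal sketch of the remaining inference steps is shown below; it reuses the `tokenizer` and `model` objects from the snippet and assumes the standard `transformers` tokenize/`generate`/decode flow. The `max_new_tokens` value, the `skip_special_tokens` flag, and the `.to(model.device)` call are illustrative assumptions, not part of the original card.

```python
# Assumed continuation of the snippet above (hypothetical sketch).
input_text = "..."  # placeholder: a Turkish news body

# Tokenize the news body; moving tensors to the model's device also covers the GPU case
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)

# Generate a title and decode it back to text
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```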