Matttttttt committed
Commit ed292ce
1 Parent(s): f711924
Update README.md
README.md CHANGED
@@ -16,9 +16,9 @@ This is a Japanese BART V2 base model pre-trained on Japanese Wikipedia.
 You can use this model as follows:
 
 ```python
-from transformers import
-tokenizer =
-model =
+from transformers import XLMRobertaTokenizer, MBartForConditionalGeneration
+tokenizer = XLMRobertaTokenizer.from_pretrained('ku-nlp/bart-v2-base-japanese')
+model = MBartForConditionalGeneration.from_pretrained('ku-nlp/bart-v2-base-japanese/')
 sentence = '京都 大学 で 自然 言語 処理 を 専攻 する 。' # input should be segmented into words by Juman++ in advance
 encoding = tokenizer(sentence, return_tensors='pt')
 ...
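For context, here is a minimal end-to-end sketch of the updated usage snippet. The class names and model id are taken from the added lines above; the committed example stops at `...`, so the generation and decoding steps below (and the `max_length` value) are assumptions for illustration, not part of the committed README.

```python
from transformers import XLMRobertaTokenizer, MBartForConditionalGeneration

# Identifiers taken from the added lines of the diff
# (the trailing slash in the committed model path is dropped here).
tokenizer = XLMRobertaTokenizer.from_pretrained('ku-nlp/bart-v2-base-japanese')
model = MBartForConditionalGeneration.from_pretrained('ku-nlp/bart-v2-base-japanese')

# Input should be segmented into words by Juman++ in advance.
sentence = '京都 大学 で 自然 言語 処理 を 専攻 する 。'
encoding = tokenizer(sentence, return_tensors='pt')

# Assumed continuation of the truncated example: generate and decode.
output_ids = model.generate(**encoding, max_length=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```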