Matttttttt committed
Commit 279848f
1 parent: 114102e

update README

Files changed (1):
  1. README.md: +2 -2
README.md CHANGED
@@ -18,8 +18,8 @@ This is a Japanese BART base model pre-trained on Japanese Wikipedia.
  You can use this model as follows:
 
  ```python
- from transformers import XLMRobertaTokenizer, MBartForConditionalGeneration
- tokenizer = XLMRobertaTokenizer.from_pretrained('ku-nlp/bart-base-japanese')
+ from transformers import AutoTokenizer, MBartForConditionalGeneration
+ tokenizer = AutoTokenizer.from_pretrained('ku-nlp/bart-base-japanese')
  model = MBartForConditionalGeneration.from_pretrained('ku-nlp/bart-base-japanese')
  sentence = '京都 大学 で 自然 言語 処理 を 専攻 する 。' # input should be segmented into words by Juman++ in advance
  encoding = tokenizer(sentence, return_tensors='pt')
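The change swaps the hard-coded `XLMRobertaTokenizer` for `AutoTokenizer`, which resolves the tokenizer class from the repository's configuration. For reference, a minimal sketch of the updated usage after this commit; the final generate/decode lines are an illustrative assumption and are not part of the diff:

```python
from transformers import AutoTokenizer, MBartForConditionalGeneration

# AutoTokenizer picks the tokenizer class declared in the model repo's config.
tokenizer = AutoTokenizer.from_pretrained('ku-nlp/bart-base-japanese')
model = MBartForConditionalGeneration.from_pretrained('ku-nlp/bart-base-japanese')

# Input should be segmented into words by Juman++ in advance.
sentence = '京都 大学 で 自然 言語 処理 を 専攻 する 。'
encoding = tokenizer(sentence, return_tensors='pt')

# Assumed continuation for illustration only (not shown in the README diff):
# generate a sequence and decode it back to text.
output = model.generate(**encoding, max_length=32)
print(tokenizer.batch_decode(output, skip_special_tokens=True))
```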