explicitly use slow tokenizer
README.md CHANGED
@@ -30,7 +30,7 @@ You can use this model for masked language modeling as follows:
 
 ```python
 from transformers import AutoTokenizer, AutoModelForMaskedLM
-tokenizer = AutoTokenizer.from_pretrained("izumi-lab/deberta-v2-small-japanese")
+tokenizer = AutoTokenizer.from_pretrained("izumi-lab/deberta-v2-small-japanese", use_fast=False)
 model = AutoModelForMaskedLM.from_pretrained("izumi-lab/deberta-v2-small-japanese")
 ...
 ```